Where is the configuration file for the routing and gateway under CentOS?

Where is the configuration file for these results?
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 192.168.0.204 0.0.0.0 UG 0 0 0 eth0
These entries aren't found in the following files:
/etc/sysconfig/network
/etc/sysconfig/network-scripts/route-eth0
I updated GATEWAY in /etc/sysconfig/network-scripts/ifcfg-eth0, but when I run
route -n
it gives me the old result:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 192.168.0.204 0.0.0.0 UG 0 0 0 eth0
Where should I search to find these entries?

Please check the files under /etc/sysconfig/network-scripts/ (note that /etc/network/interfaces is the Debian-style equivalent and does not exist on CentOS). The gateway is read from GATEWAY= in /etc/sysconfig/network or in ifcfg-eth0 only when the network service starts, so restart networking after editing.
Refer to the following link:
http://www.cyberciti.biz/faq/centos-linux-add-route-command/
~Shiva
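A minimal sketch of applying the change, assuming the eth0 interface from the question; the key point is that ifcfg-eth0 is read only when the network service (re)starts:
# After editing GATEWAY= in /etc/sysconfig/network-scripts/ifcfg-eth0,
# restart networking so the file is re-read
service network restart    # on systemd releases: systemctl restart network
# The running table should now show the new gateway
route -n
Persistent non-default routes belong in /etc/sysconfig/network-scripts/route-eth0, one route per line.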

Related

Wifi has IP by DHCP but no internet access

I have installed a new USB Wifi network card in Debian 9.
After configuring it, the router assigns me an IP via DHCP but I don't have internet access.
It is the Alfa Network AWUS036NH (Ralink RT3070 chipset) Wifi network card.
It is on a Debian 9 without a graphical environment.
I have installed the firmware-ralink package and it is using the rt2800usb driver.
I have tried the following commands:
iwconfig
eth1 no wireless extensions.
eth0 no wireless extensions.
wlan0 IEEE 802.11 ESSID:"CAMIONES"
Mode:Managed Frequency:2.437 GHz Access Point: 74:AC:B9:21:3C:E5
Bit Rate=1 Mb/s Tx-Power=20 dBm
Retry short limit:7 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off
Link Quality=70/70 Signal level=-37 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:1 Invalid misc:4 Missed beacon:0
lo no wireless extensions.
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.80.4.2 netmask 255.255.255.0 broadcast 10.80.4.255
ether 4c:02:89:12:c0:be txqueuelen 1000 (Ethernet)
RX packets 5002 bytes 631414 (616.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5510 bytes 882802 (862.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xd0600000-d06fffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 6146 bytes 509679 (497.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6146 bytes 509679 (497.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.200.18 netmask 255.255.255.0 broadcast 192.168.200.255
ether 00:c0:ca:5a:00:60 txqueuelen 1000 (Ethernet)
RX packets 8 bytes 1170 (1.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 58 bytes 7704 (7.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.80.4.1 0.0.0.0 UG 0 0 0 eth0
10.80.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0
192.168.200.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
traceroute -i wlan0 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 *^C
I have tried to add a static route so that when I use wlan0 it will find its gateway:
route add default gw 192.168.200.1 dev wlan0
The route is added, but it does not work, and I also lose internet access through eth0:
ping -c2 -I wlan0 www.google.fr
PING www.google.fr (216.58.209.67) from 192.168.200.18 wlan0: 56(84) bytes of data.
--- www.google.fr ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1032ms
Contents of the configuration files:
/etc/resolv.conf
nameserver 80.58.61.250
nameserver 8.8.8.8
nameserver 80.58.61.254
/etc/network/interfaces.d/wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid CAMIONES
wpa-psk pass
gateway 192.168.200.1
dns-nameservers 192.168.200.1
/etc/wpa_supplicant/wpa_supplicant.conf
network={
ssid="CAMIONES"
psk="pass"
}
I have tried connecting to another router and have the same problem.
What could be wrong with my configuration?
Thank you very much.
Your default route is set to go out via eth0, so all traffic will leave the eth0 interface unless you have a specific (non-default) route set to go out via wlan0.
Try this and see if you get a response:
route add -net 8.8.8.0 netmask 255.255.255.0 gw 192.168.200.1 dev wlan0
ping 8.8.8.8
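If the goal is to reach everything over wlan0, not just one test network, here is a sketch of a metric-based alternative (assuming the two gateways shown in the question; these commands are not persistent across reboots):
# Remove the flat eth0 default, then re-add both defaults with
# explicit metrics; the lower metric wins while eth0 is up
ip route del default via 10.80.4.1 dev eth0
ip route add default via 10.80.4.1 dev eth0 metric 100
ip route add default via 192.168.200.1 dev wlan0 metric 200
# Check which path a given destination would actually take
ip route get 8.8.8.8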

How does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs

The title says it all: how does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs?
If my node has the following IPs:
# ip a | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.68.48.206/22 brd 10.68.51.255 scope global virbr0
inet 253.255.0.35/24 brd 253.255.0.255 scope global bond0.3900
inet 10.244.2.0/32 scope global flannel.1
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
Kube picks 10.68.48.206 when I want it to pick 253.255.0.35, so how does it decide?
Is it based on DNS hostname resolution?
nslookup ca-rain03
Server: 10.68.50.60
Address: 10.68.50.60#53
Or default route?
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.68.48.1 0.0.0.0 UG 0 0 0 virbr0
10.0.0.0 10.68.48.1 255.0.0.0 UG 0 0 0 virbr0
10.68.48.0 0.0.0.0 255.255.252.0 U 0 0 0 virbr0
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
169.254.0.0 0.0.0.0 255.255.0.0 U 1007 0 0 bond0
169.254.0.0 0.0.0.0 255.255.0.0 U 1045 0 0 bond0.3900
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
253.255.0.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0.3900
Or something else? How can I pass the host IP of 253.255.0.35 into a pod?
Thanks
It's really picked up from the kubelet's configuration. For example, on pretty much all *nix systems the kubelet is managed by systemd, so you can see it like this 👀:
systemctl cat kubelet
# Warning: kubelet.service changed on disk, the version systemd has loaded is outdated.
# This output shows the current version of the unit's original fragment and drop-in files.
# If fragments or drop-ins were added or removed, they are not properly reflected in this output.
# Run 'systemctl daemon-reload' to reload units.
# /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 👈
[Install]
You can see the node IP is identified with the --node-ip=172.17.0.2 kubelet option. 💡
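On a kubeadm-provisioned node the same flag can be set through the standard drop-in environment file; here is a sketch assuming the 253.255.0.35 address from the question (on RPM-based systems the file is /etc/sysconfig/kubelet instead):
# Tell the kubelet which host address to advertise as the node IP
echo 'KUBELET_EXTRA_ARGS="--node-ip=253.255.0.35"' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
# INTERNAL-IP should now show 253.255.0.35, and status.hostIP follows it
kubectl get node -o wide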
✌️☮️
OK, so there must have been something weird in my k8s config. It is working as expected now, and status.hostIP is returning the correct IP.

ICMP packets can't be forwarded

Machine A 10.167.27.10
Route on A is like:
0.0.0.0 10.167.27.1 0.0.0.0 UG 0 0 0 eth0
10.167.28.20 10.167.27.11 255.255.255.255 UGH 0 0 0 eth0
10.167.27.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
Machine B 10.167.27.11
Route on B is like:
default 10.167.27.1 0.0.0.0 UG 0 0 0 eth0
10.167.27.0 * 255.255.255.0 U 0 0 0 eth0
The ip_forward and accept_redirects flags are set to 1 on B.
And machine C is 10.167.28.20. I can ping C from B.
Since there is a route on A for C that uses B as a gateway, I thought that if I pinged C from A, I would get an ICMP redirect message. But in fact I didn't get any reply.
I ran tcpdump on eth0 on B and saw the ICMP packets arrive. But why were they not forwarded?
Update:
On B I used Netfilter hook functions to output logs while the ICMP packets were being processed. The log I added at the FORWARD hook point was printed and the routing result was correct, but the log at the POSTROUTING hook point was not printed. I'm confused...
Does machine C know how to reply to A?
Machine A knows how to reach C (that's why you see the ICMP arrive at B), but if machine C does not know how to reach A, it will not respond to A (and the ping will fail).
Try to ping A from C. If it fails, add a route to machine A on machine C.
Hope this helps.
Edit
This is your topology, right?
Since B is connected to all the other devices, it knows how to reach them. You just have to enable forwarding on B and define B as the default gateway of A and C.
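A minimal sketch of that setup, assuming the addresses from the question and that B also has an interface on C's 10.167.28.0/24 network (10.167.28.11 below is a hypothetical address for it):
# On B: enable IPv4 forwarding (add to /etc/sysctl.conf to persist)
sysctl -w net.ipv4.ip_forward=1
# On C: add a return route to A via B's address on C's network
ip route add 10.167.27.10/32 via 10.167.28.11
# On C: verify the return path
ping -c2 10.167.27.10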

Why starting a docker container changes the host's default route?

I've configured my host with the following routing table:
user@host:~ $ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
So that without being connected to the VPN I'm not connected to the internet:
user@host:~ $ ping google.com
connect: Network is unreachable
As soon as I start my docker container the host's routing table changes to:
user@host:~ $ netstat -rn
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 vethcbeee28
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
And I'm connected to the internet again:
user@host:~ $ ping google.com
PING google.com (216.58.212.238) 56(84) bytes of data.
Basically, my host shouldn't be able to connect to the internet without being connected to the VPN. But starting the container sets the default route to my gateway again.
Does somebody know what's going on here? And, how to avoid that?
So far I found a workaround here which I'd like to avoid anyway.
EDIT:
I just found out that this happens even when building an image from a Dockerfile!
I was facing the same problem, and finally found a solution:
# Stop and disable the dhcpcd daemon on system boot, since we're going to start it manually with /etc/rc.local
# NB: we do so because 'docker', when building or running a container, sets up a 'bridge' interface which interferes with 'failover'
systemctl stop dhcpcd
systemctl disable dhcpcd
# Start dhcpcd daemon on each interface we are interested in
dhcpcd eth0
dhcpcd eth1
dhcpcd wlan0
# Start dhcpcd daemon on every reboot
sed -i -e 's/^exit 0$//g' /etc/rc.local
echo "dhcpcd eth0" >> /etc/rc.local
echo "dhcpcd eth1" >> /etc/rc.local
echo "dhcpcd wlan0" >> /etc/rc.local
echo "" >> /etc/rc.local
echo "exit 0" >> /etc/rc.local
I also added DNS servers for docker (probably not necessary):
cat > /etc/docker/daemon.json << EOF
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
service docker restart
You can specify the nogateway option in the /etc/dhcpcd.conf file.
# Avoid to set the default routes.
nogateway
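A short usage sketch, assuming dhcpcd manages wlan0 (the interface name is an assumption): re-run dhcpcd so it re-reads the config, then confirm no default route reappears:
# Release the lease and stop dhcpcd on the interface
dhcpcd -k wlan0
# Reacquire a lease; with nogateway set, no default route is installed
dhcpcd wlan0
# Should print nothing unless the VPN route is up
ip route show default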

Hitting resources in a private network from within a Docker container using VPN

I'm running Docker 1.9.1 on OSX, and I'm connected to my private work network with Cisco AnyConnect VPN. A service that I'm running in a Docker container connects to a DB within the work network; the DB is unreachable from within the container, but reachable from outside the container in OSX. It's also reachable from within the container if I'm connected directly to the work network rather than through the VPN. I suspect I may have to do some network configuration on the docker-machine VM, but I'm not sure where to go from here.
If you are using VirtualBox as your hypervisor for the docker-machine VMs, I suggest you set your network mode to Bridged Adapter. This way your VM will be connected to the network individually, just like your own machine. Also, to gather more information for troubleshooting, try pinging the DB host machine from the container's command line; use docker exec -it <container-name> /bin/bash
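A sketch of adding a bridged adapter from the command line, assuming the default docker-machine VM name ("default") and that the Mac's active interface is en0 (check VBoxManage list bridgedifs for yours); using the third NIC slot leaves docker-machine's NAT and host-only adapters intact:
# Stop the VM before changing its virtual network hardware
docker-machine stop default
# Attach a third NIC in bridged mode to the physical interface
VBoxManage modifyvm default --nic3 bridged --bridgeadapter3 en0
docker-machine start default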
I ran into this problem today and got AnyConnect to work without the need for split tunneling or a different VPN client like OpenConnect. All it took was a bit of port forwarding.
My Setup
MacOS Sierra 10.12
VirtualBox 5.0.26
Docker ToolBox 1.12.2
docker-vpn-helper script located at https://gist.github.com/philpodlevsky/040b44b2f8cee750ecc308271cb8d1ab
Instructions
The above is the software configuration this was tested with.
Make sure you don't have any VMs running and you are disconnected from the VPN.
Modify line 47 to either specify your insecure registry or delete the "--engine-insecure-registry :5000" parameter.
Execute the following in a shell on your Mac:
sudo launchctl unload /System/Library/LaunchDaemons/org.ntp.ntpd.plist
Workaround for MacOS Sierra. For some reason having NTP enabled causes the docker engine to hang. See:
https://forums.docker.com/t/docker-beta-for-mac-does-not-work-and-hangs-frequently-on-macos-10-12/18109/7
./docker-vpn-helper
Sets up the port forwarding, regenerates TLS certificates.
Pay attention to the following lines emitted by the script; you will need to cut and paste them into your shell.
export DOCKER_HOST=tcp://localhost:2376
export DOCKER_CERT_PATH=/Users/<username>/.docker/machine/machines/default
export DOCKER_MACHINE_NAME=default
Connect to your AnyConnect VPN and test out docker:
docker run hello-world
Check your routing inside the Docker Machine VM with
docker-machine ssh default
$ route -n
which looks like this on a fresh machine:
docker#default:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 1 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.99.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
If you've created a lot of networks, e.g. by using docker-compose, it may have created routes for those stacks that conflict with your VPN or local network routes.
docker#dev15:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 1 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-7400365dbd39
172.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-4db568a601b4
[...]
192.168.80.0 0.0.0.0 255.255.240.0 U 0 0 0 br-97690a1b4313
192.168.105.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
TL;DR
It should be safe to remove all of those networks with
docker network rm $(docker network ls -q)
since networks that are in use are not removed by default ... but nonetheless, be careful when running rm commands :)
