Vagrant CentOS guest's IP is not resolved in Arch Linux host - networking

I had a working Vagrantfile (here).
But after updating my Linux kernel (4.4 to 4.8), it stopped resolving the guest address (192.168.0.210). I do not know whether the update is the cause.
I installed VirtualBox and the kernel modules for Linux 4.8.
These are the versions I'm using:
Manjaro Linux (Arch Linux)
Kernel 4.4 or 4.8 (I tried with both)
Vagrant 1.9.1
VirtualBox 5.1.10 r112026
When I ping the guest, this is what I get:
$ ping 192.168.0.210
PING 192.168.0.210 (192.168.0.210) 56(84) bytes of data.
From 192.168.0.1 icmp_seq=1 Destination Host Unreachable
From 192.168.0.1 icmp_seq=2 Destination Host Unreachable
From 192.168.0.1 icmp_seq=3 Destination Host Unreachable
From 192.168.0.1 icmp_seq=4 Destination Host Unreachable
From 192.168.0.1 icmp_seq=5 Destination Host Unreachable
However, if I ping the network address directly, it succeeds:
$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=0.036 ms
64 bytes from 192.168.0.1: icmp_seq=4 ttl=64 time=0.035 ms
64 bytes from 192.168.0.1: icmp_seq=5 ttl=64 time=0.034 ms
This is the VirtualBox host-only network interface:
vboxnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::800:27ff:fe00:0 prefixlen 64 scopeid 0x20<link>
ether 0a:00:27:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 68 bytes 8598 (8.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
On the guest, these are the network interfaces:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:d8:71:80 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85612sec preferred_lft 85612sec
inet6 fe80::5054:ff:fed8:7180/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:20:77:0b brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:fe20:770b/64 scope link
valid_lft forever preferred_lft forever
The firewall was disabled on both the host and the guest while I was testing.
Any clue?
Thanks!

OK, it is a bug in Vagrant 1.9.1 (reverting to 1.9.0 works fine):
https://github.com/mitchellh/vagrant/issues/8142
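Until the fix ships, one way to stay on 1.9.0 on Arch/Manjaro might look like this (a sketch; the cached package filename is an assumption and will vary on your system):

```shell
# Reinstall the previous Vagrant package from the local pacman cache.
sudo pacman -U /var/cache/pacman/pkg/vagrant-1.9.0-1-x86_64.pkg.tar.xz

# Optionally hold it back in /etc/pacman.conf:
#   IgnorePkg = vagrant

# Recreate the VM so the private network is set up by 1.9.0.
vagrant destroy -f && vagrant up
```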

Related

Proxmox can't receive HTTP response but can make ICMP pings

I've recently set up Proxmox VE 6.2.
I have two network adapters: one on a LAN network and the other on a WAN network (USB RNDIS).
I've set up pfSense as a VM; following the Netgate docs, I created two bridges (WAN and LAN) with those two physical NICs.
Everything is going fine: pfSense works as expected, and all LAN clients can access the internet flawlessly through the pfSense VM.
But the issue is that Proxmox itself can't make HTTP requests. I know it's weird: it can reach the internet, e.g. I can ping 1.1.1.1 or any publicly available IP.
I tried this:
curl -vvv google.com
This is the output I got, and this is where it gets stuck; all HTTP connections act the same way:
* Trying 216.58.197.46...
* TCP_NODELAY set
* Expire in 149896 ms for 3 (transfer 0x55772a88ddc0)
* Expire in 200 ms for 4 (transfer 0x55772a88ddc0)
* Connected to google.com (216.58.197.46) port 80 (#0)
> GET / HTTP/1.1
> Host: google.com
> User-Agent: curl/7.64.0
> Accept: */*
It stays stuck there and times out after a while. apt update fails too. The connection seems to be established, but the response never comes back.
This is the ping response
ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=75.4 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=74.7 ms
No issues there.
This is one hell of a weird issue; I've never faced anything like it before.
ip route list
default via 192.168.0.1 dev vmbr0 onlink
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.114
192.168.1.0/24 dev vmbr2 proto kernel scope link src 192.168.1.102
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp14s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 3c:07:71:55:54:6e brd ff:ff:ff:ff:ff:ff
3: enx0c5b8f279a64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
link/ether 0c:5b:8f:27:9a:64 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3c:07:71:55:54:6e brd ff:ff:ff:ff:ff:ff
inet 192.168.0.114/24 brd 192.168.0.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::3e07:71ff:fe55:546e/64 scope link
valid_lft forever preferred_lft forever
5: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:5b:8f:27:9a:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.102/24 brd 192.168.1.255 scope global dynamic vmbr2
valid_lft 84813sec preferred_lft 84813sec
inet6 fe80::e5b:8fff:fe27:9a64/64 scope link
valid_lft forever preferred_lft forever
6: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
link/ether 5a:1e:56:2a:0d:fe brd ff:ff:ff:ff:ff:ff
7: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
link/ether a2:fe:d5:1d:43:8f brd ff:ff:ff:ff:ff:ff
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Proxmox IP - 192.168.0.114 (Static configured)
pfSense Gateway IP - 192.168.0.1
WAN (Internal IP) - 192.168.1.101
vmbr0 - LAN bridge
vmbr2 - WAN bridge
You should probably disable hardware checksum offloading.
This worked for me on virtualized hardware (HVM).
See this post:
https://askubuntu.com/questions/597894/can-ping-but-cannot-wget-on-host-with-bridge-interface
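For reference, a sketch of how checksum offloading could be disabled on the Proxmox side with ethtool (interface names are taken from the question's `ip a` output and may differ on your system; in pfSense itself the equivalent checkbox is under System > Advanced > Networking):

```shell
# Sketch: turn off TX/RX checksum offload on the physical WAN NIC and
# the WAN bridge. Names (enx0c5b8f279a64, vmbr2) come from the question.
ethtool -K enx0c5b8f279a64 tx off rx off   # USB RNDIS WAN NIC
ethtool -K vmbr2 tx off rx off             # WAN bridge
```

USB RNDIS adapters are a common source of broken checksum offload, which would match the symptom here: small packets (ICMP, the TCP handshake) get through, but data-bearing responses are dropped.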

How to get the name resolution working on an Ubuntu 19.04 VirtualBox VM?

I'm using multiple VirtualBox Ubuntu 18.10/19.04 VMs on a Windows 7 host. At some point, name resolution stopped working on one of them. The connection to the internet still works.
ax@buildvm:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=40.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=35.5 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=51 time=42.4 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=51 time=36.2 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 35.456/38.635/42.408/2.906 ms
ax@buildvm:~$ ping google.com
ping: google.com: Temporary failure in name resolution
How to get the name resolution working?
additional info
ax@buildvm:~$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:fb:bc:af brd ff:ff:ff:ff:ff:ff
inet 10.0.2.5/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 947sec preferred_lft 947sec
inet6 fe80::a00:27ff:fefb:bcaf/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:27:32:88 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.106/24 brd 192.168.56.255 scope global dynamic enp0s8
valid_lft 947sec preferred_lft 947sec
inet6 fe80::a00:27ff:fe27:3288/64 scope link
valid_lft forever preferred_lft forever
ax@buildvm:~$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
search fritz.box
This blog article provides the solution:
$ sudo rm /etc/resolv.conf
$ sudo ln -s /var/run/systemd/resolve/resolv.conf /etc/resolv.conf
$ sudo systemctl restart systemd-resolved.service
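A quick way to verify the result (a sketch; assumes a systemd-resolved setup like the one above):

```shell
# The symlink should now bypass the 127.0.0.53 stub and expose the
# uplink nameservers that systemd-resolved learned via DHCP.
readlink /etc/resolv.conf           # expect /var/run/systemd/resolve/resolv.conf
grep ^nameserver /etc/resolv.conf   # real DNS servers, not 127.0.0.53
ping -c 1 google.com                # name resolution should work again
```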

Why can I ping the IP of a different network interface of my server?

I have my local machine (10.0.0.2/16) directly connected to the eth4 network interface of my server.
The connection works as expected, and I can traceroute to the IP of eth4, namely 10.0.0.1.
However, I can also traceroute to the IP 10.1.0.23 of the other interface (eth5), even though it is on the wrong subnet!
In the following you see the settings of my local machine and my server.
On my local Machine (Arch Linux)
Output of ip addr:
....
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 3c:97:0e:8a:a1:5a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/16 brd 10.0.255.255 scope global enp0s25
valid_lft forever preferred_lft forever
inet6 fe80::7a0b:adb3:2eef:a3a8/64 scope link
valid_lft forever preferred_lft forever
....
Traceroutes
% sudo traceroute -I 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.184 ms 0.170 ms 0.163 ms
% sudo traceroute -I 10.1.0.23
traceroute to 10.1.0.23 (10.1.0.23), 30 hops max, 60 byte packets
1 10.1.0.23 (10.1.0.23) 0.240 ms 0.169 ms 0.166 ms
On Server (Debian)
My /etc/network/interfaces.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
#iface eth5 inet dhcp
auto eth5
allow-hotplug eth5
iface eth5 inet static
address 10.1.0.23
netmask 255.255.0.0
gateway 10.1.0.1
## Automatically load eth4 interface at boot
auto eth4
allow-hotplug eth4
# Configure network interface at eth4
iface eth4 inet static
address 10.0.0.1
netmask 255.255.0.0
gateway 10.0.0.1
Output of ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
...
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:08:a2:0a:e8:86 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/16 brd 10.0.255.255 scope global eth4
valid_lft forever preferred_lft forever
inet6 fe80::208:a2ff:fe0a:e886/64 scope link
valid_lft forever preferred_lft forever
7: eth5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:08:a2:0a:e8:87 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.23/16 brd 10.1.255.255 scope global eth5
valid_lft forever preferred_lft forever
Output of ip route:
default via 10.1.0.1 dev eth5
10.0.0.0/16 dev eth4 proto kernel scope link src 10.0.0.1
10.1.0.0/16 dev eth5 proto kernel scope link src 10.1.0.23
Why wouldn't you expect this behavior? As you can see from your Debian server's routing table, it knows how to route packets to your Arch Linux machine, so it can respond if it wants to.
I can see two likely questions you might be having:
Why does it choose to respond?
You haven't given us your firewall rules, or told us whether your server has ip_forwarding enabled. Even without IP forwarding enabled, Linux will see a locally received packet for any of its local addresses as an INPUT packet (in terms of iptables and access control decisions), not a forwarded packet. So it will respond even if forwarding is disabled.
If you don't want this behavior you could add an iptables rule to the INPUT chain to drop the packet being received on the server.
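A sketch of such a rule, using the interface and address from the question (hypothetical; adapt to your setup):

```shell
# Drop packets that arrive on eth4 but are addressed to eth5's IP,
# so the server stops answering for the "wrong" interface.
iptables -A INPUT -i eth4 -d 10.1.0.23 -j DROP
```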
Why is there only one hop in the traceroute?
You might expect that, in order to respond, the packet would need to traverse the server (be forwarded), so you would get two hops in your traceroute: one for eth4 and one for eth5. However, as mentioned above, any locally received packet is treated as input if it matches one of the local IPs. Your Arch Linux box presumably uses the Debian server as its default route, so it sends a packet with the Debian server's MAC address, hoping the server will forward it. That makes it a locally received packet at the Ethernet level on the Debian server. The server then checks the IP address, finds it is local, doesn't care that it belongs to another interface, and locally receives it at the IP layer.
If you don't want that behavior, fix it with firewall rules.

Why does forwarding a public IP to a Docker NAT IP fail?

For example, on the physical machine:
# ip addr
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ..
inet 10.32.230.90/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.32.230.61/24 scope global secondary eth2
valid_lft forever preferred_lft forever
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
link/ether 02:42:65:1b:b0:25 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
"10.32.230.90" is the main IP of this machine, and "10.32.230.61" is secondary added with "ip addr add 10.32.230.61/24 dev eth2".
After creating a Docker instance with IP = 172.17.0.10, I add the following rules to forward the secondary IP to the container:
# iptables -A POSTROUTING -j MASQUERADE
# iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to 172.17.0.10
# echo 1 > /proc/sys/net/ipv4/ip_forward
But it doesn't work: an external PC still cannot reach 10.32.230.61, although it can reach 10.32.230.90. What's the solution?
(From a certain PC, which IP is, for example, 10.32.230.95)
# ping 10.32.230.90
PING 10.32.230.90 (10.32.230.90) 56(84) bytes of data.
64 bytes from 10.32.230.90: icmp_seq=1 ttl=52 time=280 ms
64 bytes from 10.32.230.90: icmp_seq=2 ttl=52 time=336 ms
^C
# ping 10.32.230.61
(Timeout..)
I am sure that there is no IP conflict: 10.32.230.61 is not used by any other host.
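One detail worth checking (my observation, not confirmed in the thread): the first rule above omits `-t nat`, and `POSTROUTING` only exists in the nat table, so that command would fail. A sketch of the rules as presumably intended:

```shell
# Sketch (assumption: the MASQUERADE rule was meant for the nat table).
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to-destination 172.17.0.10
echo 1 > /proc/sys/net/ipv4/ip_forward
```

The FORWARD chain must also accept traffic to 172.17.0.10 for the DNATed packets to reach the container.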

IPv6: I can't connect from the outside

I'm testing IPv6 networking (using FreeBSD, VMware, NAT), but I can't connect from the outside (the host machine) via the IPv6 address; over IPv4 it works fine. How can I set up the network properly?
[root# /home/osmund]# cat /etc/rc.conf
hostname=""
ipv6_activate_all_interfaces="YES"
ifconfig_em1_ipv6="inet6 2001:db8:1::1 prefixlen 64"
#ipv6_enable="YES"
ipv6_network_interface="em1"
ifconfig_le0="DHCP"
sshd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
[root# /home/osmund]# ifconfig
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=8<VLAN_MTU>
ether 00:0c:29:8f:45:74
inet6 2001:db8:1::1 prefixlen 64
inet6 fe80::20c:29ff:fe8f:4574%em1 prefixlen 64 scopeid 0x2
inet 192.168.124.133 netmask 0xffffff00 broadcast 192.168.124.255
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
media: Ethernet autoselect
status: active
plip0: flags=8810<POINTOPOINT,SIMPLEX,MULTICAST> metric 0 mtu 1500
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=3<RXCSUM,TXCSUM>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
inet 127.0.0.1 netmask 0xff000000
nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
[root# /home/osmund]# ping6 2001:db8:1::1
PING6(56=40+8+8 bytes) 2001:db8:1::1 --> 2001:db8:1::1
16 bytes from 2001:db8:1::1, icmp_seq=0 hlim=64 time=0.529 ms
16 bytes from 2001:db8:1::1, icmp_seq=1 hlim=64 time=0.133 ms
^C
--- 2001:db8:1::1 ping6 statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.133/0.331/0.529/0.198 ms
[root# /home/osmund]#
Have you tried using a bridged network instead?
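A minimal sketch of what that could look like in /etc/rc.conf, assuming the VM's NIC stays em1, VMware is switched to bridged mode, and the LAN router sends IPv6 router advertisements (all assumptions, not from the thread):

```shell
# /etc/rc.conf fragment (sketch): let em1 autoconfigure a global IPv6
# address via SLAAC once the VM sits on the real LAN in bridged mode.
ifconfig_em1="DHCP"
ifconfig_em1_ipv6="inet6 accept_rtadv"
```

Note that 2001:db8::/32 is a documentation-only prefix, so the statically configured 2001:db8:1::1 would never be reachable from outside in any case; with bridging and SLAAC the VM gets an address from the LAN's real prefix.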
