VMs are not getting static IP from Vagrantfile - networking

I'm a beginner with networking and I'm also just starting out with VMs.
I'm working through "Ansible for DevOps", and in chapter 3 I'm supposed to create three VMs and set up a private network with static IPs.
My Vagrantfile looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "geerlingguy/centos7"
  config.ssh.insert_key = false
  config.vm.synced_folder ".", "/vagrant", disabled: true

  config.vm.provider :virtualbox do |v|
    v.memory = 256
    v.linked_clone = true
  end

  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.dev"
    app.vm.network :private_network, ip: "192.168.60.4"
  end

  config.vm.define "app2" do |app|
    app.vm.hostname = "orc-app2.dev"
    app.vm.network :private_network, ip: "192.168.60.5"
  end

  config.vm.define "db" do |db|
    db.vm.hostname = "orc-db.dev"
    db.vm.network :private_network, ip: "192.168.60.6"
  end
end
Vagrant logs:
❯ vagrant up
Bringing machine 'app1' up with 'virtualbox' provider...
Bringing machine 'app2' up with 'virtualbox' provider...
Bringing machine 'db' up with 'virtualbox' provider...
==> app1: Cloning VM...
==> app1: Matching MAC address for NAT networking...
==> app1: Checking if box 'geerlingguy/centos7' is up to date...
==> app1: Setting the name of the VM: 3_app1_1485309004899_30536
==> app1: Fixed port collision for 22 => 2222. Now on port 2202.
==> app1: Clearing any previously set network interfaces...
==> app1: Preparing network interfaces based on configuration...
app1: Adapter 1: nat
app1: Adapter 2: hostonly
==> app1: Forwarding ports...
app1: 22 (guest) => 2202 (host) (adapter 1)
==> app1: Running 'pre-boot' VM customizations...
==> app1: Booting VM...
==> app1: Waiting for machine to boot. This may take a few minutes...
app1: SSH address: 127.0.0.1:2202
app1: SSH username: vagrant
app1: SSH auth method: private key
app1: Warning: Remote connection disconnect. Retrying...
==> app1: Machine booted and ready!
==> app1: Checking for guest additions in VM...
==> app1: Setting hostname...
==> app1: Configuring and enabling network interfaces...
==> app2: Cloning VM...
==> app2: Matching MAC address for NAT networking...
==> app2: Checking if box 'geerlingguy/centos7' is up to date...
==> app2: Setting the name of the VM: 3_app2_1485309032690_32260
==> app2: Fixed port collision for 22 => 2222. Now on port 2203.
==> app2: Clearing any previously set network interfaces...
==> app2: Preparing network interfaces based on configuration...
app2: Adapter 1: nat
app2: Adapter 2: hostonly
==> app2: Forwarding ports...
app2: 22 (guest) => 2203 (host) (adapter 1)
==> app2: Running 'pre-boot' VM customizations...
==> app2: Booting VM...
==> app2: Waiting for machine to boot. This may take a few minutes...
app2: SSH address: 127.0.0.1:2203
app2: SSH username: vagrant
app2: SSH auth method: private key
app2: Warning: Remote connection disconnect. Retrying...
==> app2: Machine booted and ready!
==> app2: Checking for guest additions in VM...
==> app2: Setting hostname...
==> app2: Configuring and enabling network interfaces...
==> db: Cloning VM...
==> db: Matching MAC address for NAT networking...
==> db: Checking if box 'geerlingguy/centos7' is up to date...
==> db: Setting the name of the VM: 3_db_1485309060266_65663
==> db: Fixed port collision for 22 => 2222. Now on port 2204.
==> db: Clearing any previously set network interfaces...
==> db: Preparing network interfaces based on configuration...
db: Adapter 1: nat
db: Adapter 2: hostonly
==> db: Forwarding ports...
db: 22 (guest) => 2204 (host) (adapter 1)
==> db: Running 'pre-boot' VM customizations...
==> db: Booting VM...
==> db: Waiting for machine to boot. This may take a few minutes...
db: SSH address: 127.0.0.1:2204
db: SSH username: vagrant
db: SSH auth method: private key
db: Warning: Remote connection disconnect. Retrying...
==> db: Machine booted and ready!
==> db: Checking for guest additions in VM...
==> db: Setting hostname...
==> db: Configuring and enabling network interfaces...
And the output of vagrant ssh-config:
Host app1
  HostName 127.0.0.1
  User vagrant
  Port 2202
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/mst/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host app2
  HostName 127.0.0.1
  User vagrant
  Port 2203
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/mst/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host db
  HostName 127.0.0.1
  User vagrant
  Port 2204
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/mst/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL
So as you can see, the machines didn't get the static IPs I set for them, and I can't connect to them at those addresses. They only got the localhost address with some high forwarded ports. In this example I'm supposed to manage these machines with Ansible and use the static IPs in the inventory file, so they need to be set correctly.
Any ideas?
macOS Sierra
Vagrant 1.9.1
VirtualBox 5.1.14
Thanks
EDIT: The machines are running CentOS 7, and the ip addr output is:
[root@orc-app1 vagrant]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:dd:23:fa brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 86067sec preferred_lft 86067sec
inet6 fe80::a00:27ff:fedd:23fa/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 08:00:27:4d:38:fc brd ff:ff:ff:ff:ff:ff

Try with Vagrant 1.9.0. My co-worker had an issue where NFS shares would not mount correctly on 1.9.1, related to the fact that the box didn't bring up one necessary interface automatically.
Downgrading to 1.9.0 fixed this.
There are a couple of open issues on Vagrant's GitHub, and they relate to RHEL/CentOS 7 specifically.
This is one of them: https://github.com/mitchellh/vagrant/issues/8138

I checked this against the example: the file for the network interface has been created correctly by Vagrant
[vagrant@orc-app2 ~]$ cd /etc/sysconfig/network-scripts
[vagrant@orc-app2 network-scripts]$ ll
total 236
-rw-r--r--. 1 root root 353 25 janv. 16:06 ifcfg-enp0s3
-rw-------. 1 vagrant vagrant 214 25 janv. 16:06 ifcfg-enp0s8
and the content of this new network interface file is correct
[vagrant@orc-app2 network-scripts]$ more ifcfg-enp0s8
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.60.5
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no
#VAGRANT-END
so I just restarted the network service to test
[vagrant@orc-app2 network-scripts]$ sudo systemctl restart network
and it was OK
[vagrant@orc-app2 network-scripts]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:dd:23:fa brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 86391sec preferred_lft 86391sec
inet6 fe80::a00:27ff:fedd:23fa/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:42:83:e9 brd ff:ff:ff:ff:ff:ff
inet 192.168.60.5/24 brd 192.168.60.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe42:83e9/64 scope link
valid_lft forever preferred_lft forever
I don't have another CentOS 7 box to test with (I'm still working happily with CentOS 6), so I can't confirm whether it's an issue with this particular box or with CentOS 7 itself.
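If you don't want to log into each guest by hand, the same workaround can be driven from the host (just a sketch, assuming the machine names from the Vagrantfile above and that the restart is all that's needed):
for m in app1 app2 db; do
  vagrant ssh "$m" -c 'sudo systemctl restart network'
done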

Related

Kubernetes, can't reach other node services

I'm playing with Kubernetes inside 3 VirtualBox VMs with CentOS 7: 1 master and 2 minions. Unfortunately, the installation manuals say that every service will be accessible from every node and every pod will see all other pods, but I don't see this happening. I can access a service only from the node where its pod runs. Please help me find out what I'm missing; I'm very new to Kubernetes.
Every VM has 2 adapters: NAT and host-only. The second one has IPs in the 10.0.13.101-254 range.
Master-1: 10.0.13.104
Minion-1: 10.0.13.105
Minion-2: 10.0.13.106
Get all pods from master:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 1/1 Running 4 37m
default nginx-demo-2867147694-f6f9m 1/1 Running 1 52m
default nginx-demo2-2631277934-v4ggr 1/1 Running 0 5s
kube-system etcd-master-1 1/1 Running 1 1h
kube-system kube-apiserver-master-1 1/1 Running 1 1h
kube-system kube-controller-manager-master-1 1/1 Running 1 1h
kube-system kube-dns-2425271678-kgb7k 3/3 Running 3 1h
kube-system kube-flannel-ds-pwsq4 2/2 Running 4 56m
kube-system kube-flannel-ds-qswt7 2/2 Running 4 1h
kube-system kube-flannel-ds-z0g8c 2/2 Running 12 56m
kube-system kube-proxy-0lfw0 1/1 Running 2 56m
kube-system kube-proxy-6263z 1/1 Running 2 56m
kube-system kube-proxy-b8hc3 1/1 Running 1 1h
kube-system kube-scheduler-master-1 1/1 Running 1 1h
Get all services:
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 1h
nginx-demo 10.104.34.229 <none> 80/TCP 51m
nginx-demo2 10.102.145.89 <none> 80/TCP 3s
Get Nginx pods IP info:
$ kubectl get pod nginx-demo-2867147694-f6f9m -o json | grep IP
"hostIP": "10.0.13.105",
"podIP": "10.244.1.58",
$ kubectl get pod nginx-demo2-2631277934-v4ggr -o json | grep IP
"hostIP": "10.0.13.106",
"podIP": "10.244.2.14",
As you can see, one Nginx example is on the first minion and the second example is on the second minion.
The problem is that I can access nginx-demo only from node 10.0.13.105 (via both the pod IP and the service IP), with curl:
curl 10.244.1.58:80
curl 10.104.34.229:80
and nginx-demo2 only from 10.0.13.106, not vice versa and not from the master node. Busybox is on node 10.0.13.105, so it can reach nginx-demo, but not nginx-demo2.
How do I make access to the service node-independently? Is flannel misconfigured?
Routing table on master:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
10.0.13.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
Routing table on minion-1:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
10.0.13.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel.1
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
Maybe the default gateway is the problem (the NAT adapter)? Another problem: from the busybox container I try to resolve service DNS names and that doesn't work either:
$ kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"
If you don't see a command prompt, try pressing enter.
/ #
/ # nslookup nginx-demo
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'nginx-demo'
/ #
/ # nslookup nginx-demo.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'nginx-demo.default.svc.cluster.local'
Guest OS security has already been relaxed:
setenforce 0
systemctl stop firewalld
Feel free to ask for more info if you need it.
Additional info
kube-dns logs:
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k kubedns
I0919 07:48:45.000397 1 dns.go:48] version: 1.14.3-4-gee838f6
I0919 07:48:45.114060 1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0919 07:48:45.114129 1 server.go:113] FLAG: --alsologtostderr="false"
I0919 07:48:45.114144 1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0919 07:48:45.114155 1 server.go:113] FLAG: --config-map=""
I0919 07:48:45.114162 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0919 07:48:45.114169 1 server.go:113] FLAG: --config-period="10s"
I0919 07:48:45.114179 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0919 07:48:45.114186 1 server.go:113] FLAG: --dns-port="10053"
I0919 07:48:45.114196 1 server.go:113] FLAG: --domain="cluster.local."
I0919 07:48:45.114206 1 server.go:113] FLAG: --federations=""
I0919 07:48:45.114215 1 server.go:113] FLAG: --healthz-port="8081"
I0919 07:48:45.114223 1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0919 07:48:45.114230 1 server.go:113] FLAG: --kube-master-url=""
I0919 07:48:45.114238 1 server.go:113] FLAG: --kubecfg-file=""
I0919 07:48:45.114245 1 server.go:113] FLAG: --log-backtrace-at=":0"
I0919 07:48:45.114256 1 server.go:113] FLAG: --log-dir=""
I0919 07:48:45.114264 1 server.go:113] FLAG: --log-flush-frequency="5s"
I0919 07:48:45.114271 1 server.go:113] FLAG: --logtostderr="true"
I0919 07:48:45.114278 1 server.go:113] FLAG: --nameservers=""
I0919 07:48:45.114285 1 server.go:113] FLAG: --stderrthreshold="2"
I0919 07:48:45.114292 1 server.go:113] FLAG: --v="2"
I0919 07:48:45.114299 1 server.go:113] FLAG: --version="false"
I0919 07:48:45.114310 1 server.go:113] FLAG: --vmodule=""
I0919 07:48:45.116894 1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0919 07:48:45.117296 1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0919 07:48:45.117329 1 dns.go:147] Starting endpointsController
I0919 07:48:45.117336 1 dns.go:150] Starting serviceController
I0919 07:48:45.117702 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.117716 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.620177 1 dns.go:171] Initialized services and endpoints from apiserver
I0919 07:48:45.620217 1 server.go:129] Setting up Healthz Handler (/readiness)
I0919 07:48:45.620229 1 server.go:134] Setting up cache handler (/cache)
I0919 07:48:45.620238 1 server.go:120] Status HTTP port 8081
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k dnsmasq
I0919 07:48:48.466499 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0919 07:48:48.478353 1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0919 07:48:48.697877 1 nanny.go:111]
W0919 07:48:48.697903 1 nanny.go:112] Got EOF from stdout
I0919 07:48:48.697925 1 nanny.go:108] dnsmasq[10]: started, version 2.76 cachesize 1000
I0919 07:48:48.697937 1 nanny.go:108] dnsmasq[10]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0919 07:48:48.697943 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0919 07:48:48.697947 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0919 07:48:48.697950 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0919 07:48:48.697955 1 nanny.go:108] dnsmasq[10]: reading /etc/resolv.conf
I0919 07:48:48.697959 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0919 07:48:48.697962 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0919 07:48:48.697965 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0919 07:48:48.697968 1 nanny.go:108] dnsmasq[10]: using nameserver 85.254.193.137#53
I0919 07:48:48.697971 1 nanny.go:108] dnsmasq[10]: using nameserver 92.240.64.23#53
I0919 07:48:48.697975 1 nanny.go:108] dnsmasq[10]: read /etc/hosts - 7 addresses
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k sidecar
ERROR: logging before flag.Parse: I0919 07:48:49.990468 1 main.go:48] Version v1.14.3-4-gee838f6
ERROR: logging before flag.Parse: I0919 07:48:49.994335 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I0919 07:48:49.994369 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I0919 07:48:49.994435 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
kube-flannel logs from one pod, but it is similar for the others:
$ kubectl -n kube-system logs kube-flannel-ds-674mx kube-flannel
I0919 08:07:41.577954 1 main.go:446] Determining IP address of default interface
I0919 08:07:41.579363 1 main.go:459] Using interface with name enp0s3 and address 10.0.2.15
I0919 08:07:41.579408 1 main.go:476] Defaulting external address to interface address (10.0.2.15)
I0919 08:07:41.600985 1 kube.go:130] Waiting 10m0s for node controller to sync
I0919 08:07:41.601032 1 kube.go:283] Starting kube subnet manager
I0919 08:07:42.601553 1 kube.go:137] Node controller sync successful
I0919 08:07:42.601959 1 main.go:226] Created subnet manager: Kubernetes Subnet Manager - minion-1
I0919 08:07:42.601966 1 main.go:229] Installing signal handlers
I0919 08:07:42.602036 1 main.go:330] Found network config - Backend type: vxlan
I0919 08:07:42.606970 1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I0919 08:07:42.608380 1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0919 08:07:42.609579 1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
I0919 08:07:42.611257 1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I0919 08:07:42.612595 1 main.go:279] Wrote subnet file to /run/flannel/subnet.env
I0919 08:07:42.612606 1 main.go:284] Finished starting backend.
I0919 08:07:42.612638 1 vxlan_network.go:56] Watching for L3 misses
I0919 08:07:42.612651 1 vxlan_network.go:64] Watching for new subnet leases
$ kubectl -n kube-system logs kube-flannel-ds-674mx install-cni
+ cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf
+ true
+ sleep 3600
+ true
+ sleep 3600
I've added some more services exposed with type NodePort; this is what I get when scanning ports from the host machine:
# nmap 10.0.13.104 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.104
Host is up (0.0014s latency).
Not shown: 49992 closed ports
PORT STATE SERVICE
22/tcp open ssh
6443/tcp open sun-sr-https
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:90:26:1C (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.96 seconds
# nmap 10.0.13.105 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.105
Host is up (0.00040s latency).
Not shown: 49993 closed ports
PORT STATE SERVICE
22/tcp open ssh
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp open unknown
31844/tcp open unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:F8:E3:71 (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.87 seconds
# nmap 10.0.13.106 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:21 EEST
Nmap scan report for 10.0.13.106
Host is up (0.00059s latency).
Not shown: 49993 closed ports
PORT STATE SERVICE
22/tcp open ssh
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp open unknown
MAC Address: 08:00:27:D9:33:32 (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.92 seconds
If we take the last service, on port 32619: it exists everywhere, but it is open only on the node its pod runs on; on the others it's filtered.
tcpdump info on Minion-1
Connection from Host to Minion-1 with curl 10.0.13.105:30572:
# tcpdump -ni enp0s8 tcp or icmp and not port 22 and not host 10.0.13.104
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:39.043874 IP 10.0.13.1.41132 > 10.0.13.105.30572: Flags [S], seq 657506957, win 29200, options [mss 1460,sackOK,TS val 504213496 ecr 0,nop,wscale 7], length 0
13:11:39.045218 IP 10.0.13.105 > 10.0.13.1: ICMP time exceeded in-transit, length 68
on flannel.1 interface:
# tcpdump -ni flannel.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:49.499148 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499239 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499247 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
Only an ICMP time-exceeded-in-transit error and SYN packets, so there is no connectivity between the pod networks across nodes; curl 10.0.13.106:30572 (against the node where the pod actually runs) works.
Minion-1 interfaces
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:35:72:ab brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 77769sec preferred_lft 77769sec
inet6 fe80::772d:2128:6aaa:2355/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:f8:e3:71 brd ff:ff:ff:ff:ff:ff
inet 10.0.13.105/24 brd 10.0.13.255 scope global dynamic enp0s8
valid_lft 1089sec preferred_lft 1089sec
inet6 fe80::1fe0:dba7:110d:d673/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::f04f:5413:2d27:ab55/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:59:53:d7:fd brd ff:ff:ff:ff:ff:ff
inet 10.244.1.2/24 scope global docker0
valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether fa:d3:3e:3e:77:19 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::f8d3:3eff:fe3e:7719/64 scope link
valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
link/ether 0a:58:0a:f4:01:01 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::c4f9:96ff:fed8:8cb6/64 scope link
valid_lft forever preferred_lft forever
13: veth5e2971fe@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether 1e:70:5d:6c:55:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1c70:5dff:fe6c:5533/64 scope link
valid_lft forever preferred_lft forever
14: veth8f004069@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether ca:39:96:59:e6:63 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::c839:96ff:fe59:e663/64 scope link
valid_lft forever preferred_lft forever
15: veth5742dc0d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether c2:48:fa:41:5d:67 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::c048:faff:fe41:5d67/64 scope link
valid_lft forever preferred_lft forever
It works either by disabling the firewall or by running the command below.
I found this open bug in my search. It looks like this is related to Docker >= 1.13 and flannel.
Refer: https://github.com/coreos/flannel/issues/799
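The command itself didn't make it into this copy; given that the linked issue is about Docker >= 1.13 switching the default iptables FORWARD policy to DROP, it is presumably the usual workaround, shown here as an assumption:
sudo iptables -P FORWARD ACCEPT   # assumed workaround from coreos/flannel#799 (Docker >= 1.13 defaults the FORWARD policy to DROP)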
I'm not great at networking, but we were in the same situation as you: we set up four virtual machines, one as the master and the rest as worker nodes. I tried to nslookup a service from a container in a pod, but the lookup failed, stuck waiting for a response from the Kubernetes DNS.
I realized that the DNS configuration or the network component was not right, so I looked into the logs of Canal (the CNI we use to set up the Kubernetes network) and found that it was initialized with the default interface, which is the NAT one rather than the host-only one, as shown below. We then corrected it, and it works now.
https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/canal.yaml
# The interface used by canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
canal_iface: ""
I'm not sure which CNI you use, but I hope this helps you check.
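Since the question uses plain flannel rather than Canal, the analogous fix (a sketch on my part, assuming the host-only interface is enp0s8, as in the question's ip addr output) would be to pin flanneld to that interface via its --iface option:
kubectl -n kube-system edit ds kube-flannel-ds
# in the editor, add the flag below to the kube-flannel container's args, then let the pods restart:
#   --iface=enp0s8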

Why can I ping the IP of a different network interface of my server?

I have my local machine (10.0.0.2/16) directly connected to the eth4 network interface of my server.
The connection works as expected and I can traceroute the IP of eth4, namely 10.0.0.1.
However, I can also traceroute the IP 10.1.0.23 of the other interface (eth5), even though it is on the wrong subnet!
In the following you see the settings of my local machine and my server.
On my local Machine (Arch Linux)
Output of ip addr:
....
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 3c:97:0e:8a:a1:5a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/16 brd 10.0.255.255 scope global enp0s25
valid_lft forever preferred_lft forever
inet6 fe80::7a0b:adb3:2eef:a3a8/64 scope link
valid_lft forever preferred_lft forever
....
Traceroutes
% sudo traceroute -I 10.0.0.1
traceroute to 10.0.0.1 (10.0.0.1), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.184 ms 0.170 ms 0.163 ms
% sudo traceroute -I 10.1.0.23
traceroute to 10.1.0.23 (10.1.0.23), 30 hops max, 60 byte packets
1 10.1.0.23 (10.1.0.23) 0.240 ms 0.169 ms 0.166 ms
On Server (Debian)
My /etc/network/interfaces.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
#iface eth5 inet dhcp
auto eth5
allow-hotplug eth5
iface eth5 inet static
address 10.1.0.23
netmask 255.255.0.0
gateway 10.1.0.1
## Automatically load eth4 interface at boot
auto eth4
allow-hotplug eth4
# Configure network interface at eth4
iface eth4 inet static
address 10.0.0.1
netmask 255.255.0.0
gateway 10.0.0.1
Output of ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
...
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:08:a2:0a:e8:86 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/16 brd 10.0.255.255 scope global eth4
valid_lft forever preferred_lft forever
inet6 fe80::208:a2ff:fe0a:e886/64 scope link
valid_lft forever preferred_lft forever
7: eth5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:08:a2:0a:e8:87 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.23/16 brd 10.1.255.255 scope global eth5
valid_lft forever preferred_lft forever
Output of ip route:
default via 10.1.0.1 dev eth5
10.0.0.0/16 dev eth4 proto kernel scope link src 10.0.0.1
10.1.0.0/16 dev eth5 proto kernel scope link src 10.1.0.23
Why wouldn't you expect this behavior? As you can see from your Debian server's routing table, it knows how to route packets to your Arch Linux machine, so it can respond if it wants to.
I can see two likely questions you might be having:
Why does it choose to respond?
You haven't given us your firewall rules, or told us whether your server has ip_forwarding enabled. Even without IP forwarding enabled, Linux will see a locally received packet for any of its local addresses as an INPUT packet (in terms of iptables and access control decisions), not a forwarded packet. So it will respond even if forwarding is disabled.
If you don't want this behavior, you could add an iptables rule to the INPUT chain on the server to drop such packets as they are received.
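A minimal sketch of such a rule, using the interfaces and addresses from the question (drop packets that arrive on eth4 but are addressed to eth5's IP):
sudo iptables -A INPUT -i eth4 -d 10.1.0.23 -j DROP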
Why is there only one hop in the traceroute?
You might expect that in order to respond, the packet would need to traverse the server (be forwarded), so you would get two hops in your traceroute: one for eth4 and one for eth5. However, as mentioned above, any locally received packet is treated as input if it matches one of the local IPs. Your Arch Linux box presumably uses the Debian server as its default route. So it sends the packet with the Debian server's MAC address, hoping the Debian server will forward it. That makes it a locally received packet at the Ethernet level on the Debian server. The server then checks the IP address, finds it is local, doesn't care that it belongs to another Ethernet interface, and locally receives it at the IP layer.
If you don't want that behavior, fix it with firewall rules.

Network unreachable inside docker container without --net=host parameter

Problem: there is no internet connection in the docker container.
Symptoms: ping 8.8.8.8 doesn't work. Wireshark on the host system shows:
19 10.866212113 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=0/0, ttl=64
20 11.867231972 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=1/256, ttl=64
21 12.868331353 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=2/512, ttl=64
22 13.869400083 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=3/768, ttl=64
But! If the container is started with --net=host, the internet works perfectly.
What I've tried so far:
altering DNS
adding --ip-masq=true to /etc/default/docker (with restart off)
enabling everything related to masquerade / ip_forward
altering default route
everything suggested here
Host config:
$ sudo route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.4.2.1 0.0.0.0 UG 0 0 0 eno1.3001
default 10.3.2.1 0.0.0.0 UG 100 0 0 eno2
10.3.2.0 * 255.255.254.0 U 100 0 0 eno2
10.4.2.0 * 255.255.254.0 U 0 0 0 eno1.3001
nerv8.i 10.3.2.1 255.255.255.255 UGH 100 0 0 eno2
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
sudo iptables -L, cat /etc/network/interfaces, ifconfig, iptables -t nat -L -nv
Everything is fine, forwarding is also enabled:
$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
This is not the full answer you are looking for, but I would like to give some explanation of why the internet works in that case.
If container was started with --net=host internet would work
perfectly.
Docker by default supports three networks. In this mode (host), the container shares the host's network stack and all interfaces from the host are available to the container. The container's hostname will match the hostname of the host system.
# docker run -it --net=host ubuntu:14.04 /bin/bash
root@labadmin-VirtualBox:/# hostname
labadmin-VirtualBox
Even the IP configuration is the same as the host system's IP configuration:
root@labadmin-VirtualBox:/# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
root@labadmin-VirtualBox:/# exit
exit
HOST SYSTEM IP CONFIGURATION
# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
Refer to this for more information about Docker networking.
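You can list those built-in networks on the host with, for example:
docker network ls
# shows the default bridge, host and none networks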
Can you run sudo ifconfig and see if the IP range of your internet connection (typically wlan0) is colliding with the range of the docker0 interface, 172.17.0.0?
I had this issue with my office network (while it was working fine at home): it ran on 172.17.0.X and Docker tried to pick exactly that range.
This might be of help: http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
I ended up creating my own bridge network for Docker.
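For reference, a minimal sketch of that kind of setup (the subnet and network name below are just examples, not what the answer actually used; pick a range that doesn't collide with your LAN):
docker network create --driver bridge --subnet 192.168.200.0/24 mybridge
docker run --rm --net=mybridge busybox ping -c 3 8.8.8.8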
Check that net.ipv4.conf.all.forwarding (not net.ipv4.ip_forward) is set to 1; if not, turn it on:
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
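If that fixes it, you will probably want the setting to survive a reboot; a sketch, assuming a distribution with a standard sysctl.d layout:
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf
sudo sysctl --system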

Not able to connect to Vagrant private network from host on VPN (CISCO AnyConnect)

On a VPN connection (to another location of my office), my Vagrant box is not reachable via the browser. It works fine in my office location.
Here is vagrant reload:
==> default: Attempting graceful shutdown of VM...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 4.3.10
default: VirtualBox Version: 5.0
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => /Users/sachinkushwaha/Workspace/vagrant-quikr
default: /home/axle => /Users/sachinkushwaha/Workspace/quikraxledashboard
default: /home/data => /Users/sachinkushwaha/Workspace/quikr_prod/QuikrBaseCode
default: /home/vhosts => /Users/sachinkushwaha/Workspace/vhosts
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
I tried to connect many times.
Output of ip addr show in the Vagrant guest:
vagrant@vagrant-ubuntu-trusty-64:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:3e:96:5b brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe3e:965b/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:d7:25:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.33.10/16 brd 192.168.255.255 scope global eth1
I want to access the web server on that machine.
I also tried NAT port forwarding:
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 80, host: 8080,
    auto_correct: true
end
It doesn't work for me.
This is a workaround, not a fix. After you power on your laptop/workstation, but before starting Cisco AnyConnect, start your VM (i.e. vagrant up). Make sure you can connect to an app in the VM via the browser. Then start AnyConnect.
As long as you start your VM before AnyConnect, you should be able to vagrant up and vagrant [whatever] the VM as often as needed without rebooting. You'll need to repeat that process every time you power on your laptop/workstation. At least that works for us. Good luck!
I'm a bit confused about the network setup and what you are trying to achieve. If the Vagrant guest is on your local machine, you can access it by simply typing http://localhost:8080 in your browser, and the VPN shouldn't really matter.
If the Vagrant guest is on another machine on another network, which you are using the VPN to reach, then as long as the VPN connection on your local machine is up, you should be able to access it by appending :8080 to the IP of the box. From the code you posted, that could be either http://10.0.2.15:8080 or http://192.168.33.10:8080.
If I have misunderstood the question, please comment with additional information!
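For the first case, a quick sanity check from the host (assuming the NAT port forward from the question is active):
curl -I http://localhost:8080/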

Unable to connect to Vagrant private network from host

I have a Vagrant VirtualBox VM up and running. So far I have been unable to connect to the web server. Here is the startup:
[jesse@Athens VVV-1.1]$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 4.2.0
default: VirtualBox Version: 4.3
==> default: Setting hostname...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => /home/jesse/vagrant/vvvStable/VVV-1.1
default: /srv/www => /home/jesse/vagrant/vvvStable/VVV-1.1/www
default: /srv/config => /home/jesse/vagrant/vvvStable/VVV-1.1/config
default: /srv/database => /home/jesse/vagrant/vvvStable/VVV-1.1/database
default: /var/lib/mysql => /home/jesse/vagrant/vvvStable/VVV-1.1/database/data
==> default: VM already provisioned. Run `vagrant provision` or use `--provision` to force it
==> default: Checking for host entries
On my host console, ip addr show yields:
4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
5: vboxnet1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 0a:00:27:00:00:01 brd ff:ff:ff:ff:ff:ff
On the guest it yields:
vagrant@vvv:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:12:96:98 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
inet6 fe80::a00:27ff:fe12:9698/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:2c:d4:3e brd ff:ff:ff:ff:ff:ff
inet 192.168.50.4/24 brd 192.168.50.255 scope global eth1
For now, all I want to do is access the web server on the virtual machine, whatever way works. I have tried a variety of things, just shooting in the dark. I would be happy to provide any specific info. Any help or suggestions would be greatly appreciated.
Based on the output provided, the box has 2 network interfaces: one is the default NAT and the other is the private one, as you said.
The reason you are not able to access the web site hosted within the VM through the private interface could be that the host's eth0 or wlan0 IP address is not in the same network as the private interface (192.168.50.4/24) and there is no route.
To access the site hosted by the web server within the guest, you have the following options:
1. NAT port forwarding
Forward the web port, e.g. guest 80 to host 8080 (you can't use 80 on the host because it is a privileged port on *NIX). Add the following:
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 80, host: 8080,
    auto_correct: true
end
NOTE: auto_correct will resolve port conflicts if the port on the host is already in use.
Do a vagrant reload and you'll be able to access the site via http://localhost:8080/
2. Public network (VirtualBox bridged networking)
Add a public network interface:
Vagrant.configure("2") do |config|
  config.vm.network "public_network"
end
Get the IP of the VM after it is up and running; port forwarding does NOT apply to bridged networking. You'll access the site via http://IP_ADDR if the server binds to port 80 within the VM, otherwise specify the port.
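One way to find that IP from the host once the box is up (a sketch, assuming the default machine name and an Ubuntu guest that supports hostname -I):
vagrant ssh -c "hostname -I"
# or, for full interface details:
vagrant ssh -c "ip addr show"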
One more possibility, just for future reference.
Normally when you create VMs using private networking, Vagrant (VirtualBox? not sure) creates corresponding entries in the host's routing table. You can see these using:
netstat -rn
Somehow my host had gotten into a state where creating the VMs did not result in new routes appearing in the routing table, with a corresponding inability to connect. Again, you can see the routes not appearing using the command above.
Creating the route manually allowed me to reach the VMs. For example:
sudo route -nv add -net 10.0.4 -interface vboxnet
(Substitute the appropriate network and interface.) But I didn't want to have to do that.
Based on this question, I tried restarting my host, and Vagrant started automatically creating the routing table entries again.
Not sure exactly what the issue was, but hopefully this helps somebody.
Your interface is down
I had the same issue. It was my vboxnet0 interface that was down. In your ip addr listing you have <BROADCAST,MULTICAST> for that interface, but it should be <BROADCAST,MULTICAST,UP,LOWER_UP>.
That means your interface is down.
You can confirm with sudo ifconfig: the interface will not be shown, but if you add -a you will see it: sudo ifconfig -a.
How to bring it up
To bring it up you can do:
sudo ifconfig vboxnet0 up
OR
sudo ip link set vboxnet0 up
Both work.
Alternatively, you could use manual port forwarding via SSH (SSH tunneling):
ssh -L 80:127.0.0.1:80 vagrant@127.0.0.1 -p 2222
That binds host port 80 to VM port 80 via your SSH session to the VM.
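Note that binding host port 80 usually requires root; a variant of the same tunnel using an unprivileged host port would be:
ssh -L 8080:127.0.0.1:80 vagrant@127.0.0.1 -p 2222
# then browse http://localhost:8080/ on the host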
I ended up getting the private network to work as well by deleting it within VirtualBox. When I recreated it with vagrant up, the ip addr output became:
vboxnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.1/24 brd 192.168.50.255 scope global vboxnet0
valid_lft forever preferred_lft forever
I had a similar issue on my Mac. VirtualBox uses host-only networking for private networks. To use an internal network instead, I had to add this to the private network configuration:
virtualbox__intnet: true
This may not apply exactly, but "private network" in the title brought me here, and others trying to run multiple guest boxes on Mac OS X may benefit:
I use "private_network" and don't do any port forwarding, i.e. I access my VMs via hostnames like "project1.local" and "project2.local".
So I was surprised when I tried to launch a second box (a scotch/box Ubuntu for LAMP) and it refused to launch with an error (excerpt):
"...The forwarded port to 2222 is already in use on the host machine..."
The error message's proposed solution doesn't work, i.e. adding this to your Vagrantfile:
config.vm.network :forwarded_port, guest: 22, host: 1234
#Where 1234 would be a different port.
I am not sure why it happens, because I've run multiple boxes before (but not scotch/box). The problem is that even if you use private_network, Vagrant still uses port forwarding for SSH.
The solution is to set the SSH port explicitly by adding this to your Vagrantfiles:
# Specify SSH config explicitly with a unique host port for each box
config.vm.network :forwarded_port,
  guest: 22,
  host: 1234,
  id: "ssh",
  auto_correct: true
Note: auto_correct may make non-unique port numbers work, but I haven't tested that.
Now, you can run multiple VMs at the same time using private networking.
(Thanks to Aaron Aaron and his posting here: https://groups.google.com/forum/#!topic/vagrant-up/HwqFegoCXOc)
I was having the same issue with Arch (2017-01-01). I had to install net-tools: sudo pacman -S net-tools
VirtualBox 5.1.12 r112440, Vagrant 1.9.1.
You have set a private network for your Vagrant machine.
If that IP is not visible, SSH into your Vagrant machine and run this command:
sudo /etc/init.d/networking restart
Also check that your firewall and iptables are not blocking the traffic.
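To check the firewall quickly (the exact commands depend on the distribution; these assume an Ubuntu-style guest like the one in the question):
sudo ufw status
sudo iptables -L -n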
