kvm/qemu debian 10 vm network issue - networking

I'm trying to run a default basic deb10 VM on my deb10 dedicated server, but I can't reach the VM on the default network. I can't make it acquire any IP address nor reach it in any way. I tried many things from many threads found online, without success.
The easiest solution I found was to enable port forwarding (because of the NAT mode of the default conf) and start over, but it didn't work either.
sudo sysctl -w net.ipv4.ip_forward=1
I'll try to give as much information as I can.
Script
#!/bin/bash
vname="deb"
virt-builder debian-10 \
--size 15G \
--format qcow2 -o "disk/$vname.qcow2" \
--hostname "$vname.local" \
--ssh-inject "root:string:ssh-rsa somesuperrsapubkey user@host" \
--root-password disabled \
--timezone "Europe/Paris" \
--update
virt-install \
--import \
--name "$vname" \
--ram 1024 \
--vcpu 1 \
--disk "disk/$vname.qcow2" \
--os-variant debian10 \
--network default \
--noautoconsole
Nothing very fancy in this; I'm trying to stay as basic as possible.
IP interfaces
ansible@host:/kvm$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether x:x:x:x:x:x brd ff:ff:ff:ff:ff:ff
inet x.x.x.x/24 brd x.x.x.255 scope global dynamic eno1
valid_lft 57059sec preferred_lft 57059sec
inet6 x::x:x:x:x/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether x:x:x:x:x:x brd ff:ff:ff:ff:ff:ff
42: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:9b:bf:4c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
43: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:9b:bf:4c brd ff:ff:ff:ff:ff:ff
44: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:9a:81:24 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe9a:8124/64 scope link
valid_lft forever preferred_lft forever
Firewall
ansible@host:/kvm$ sudo iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0
0 0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT udp -- * virbr0 0.0.0.0/0 0.0.0.0/0 udp dpt:68
Virsh manipulations
ansible@host:/kvm$ sudo virsh
virsh # net-dumpxml default
<network connections='1'>
<name>default</name>
<uuid>75e2d7eb-389c-406b-a63e-7fe5e9f188f5</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:9b:bf:4c'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
virsh # domifaddr deb
Name MAC address Protocol Address
-------------------------------------------------------------------------------
virsh # domiflist deb
Interface Type Source Model MAC
-------------------------------------------------------------
vnet0 network default virtio 52:54:00:9a:81:24
virsh # list
Id Name State
----------------------
19 deb running
virsh # net-list
Name State Autostart Persistent
--------------------------------------------
default active no yes
Is there anybody who can help me find my mistake?
Thanks all.

It turns out that the VM network interface is not activated at first boot:
ifup enp1s0
I found this workaround for now, but I'd like to have a better solution.
VM image build
virt-builder debian-10 \
--size 15G \
--format qcow2 -o "disk/deb.qcow2" \
--hostname "deb.local" \
--timezone "Europe/Paris" \
--upload 00-init:/etc/network/interfaces.d/00-init \
--update
00-init file
user@host:~/kvm$ cat 00-init
allow-hotplug enp1s0
iface enp1s0 inet dhcp
Then the VM does get an IP address from the host DHCP.
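The host-side output in the question (the `net-dumpxml` XML and the `iptables -nvL` rules) is what a libvirt NAT network is supposed to look like, which is why the fix ended up inside the guest. The script below is an added example, not code from the question or the answer: it parses those two outputs, embedded verbatim from the thread, and reports anything a guest needs for DHCP, DNS and forwarding that is missing. The `fe:` tap-MAC convention encoded in `tap_mac_for` is libvirt behaviour I am relying on, not something the thread states.

```python
"""Sanity-check a libvirt NAT network from the host side.

Added example (not the thread's code): parse `virsh net-dumpxml` XML and
`iptables -nvL` output and list what is missing for guests to get a DHCP
lease, resolve names and forward traffic. An empty report means the host
side is complete and the next place to look is inside the guest.
"""
import ipaddress
import re
import xml.etree.ElementTree as ET

# Both inputs are copied from the question above.
NET_XML = """<network connections='1'>
  <name>default</name>
  <forward mode='nat'><nat><port start='1024' end='65535'/></nat></forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:9b:bf:4c'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.122.2' end='192.168.122.254'/></dhcp>
  </ip>
</network>"""

IPTABLES = """Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
    0     0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
    0     0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
    0     0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
    0     0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
    0     0 ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED
    0     0 ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0
    0     0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0
    0     0 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
    0     0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
    0     0 ACCEPT udp -- * virbr0 0.0.0.0/0 0.0.0.0/0 udp dpt:68
"""


def parse_network(xml_text):
    """Extract bridge name, forward mode, subnet and DHCP range from the network XML."""
    root = ET.fromstring(xml_text)
    ip = root.find("ip")
    iface = ipaddress.ip_interface(f"{ip.get('address')}/{ip.get('netmask')}")
    rng = ip.find("dhcp/range")
    forward = root.find("forward")
    return {
        "bridge": root.find("bridge").get("name"),
        "mode": forward.get("mode") if forward is not None else None,
        "subnet": iface.network,
        "dhcp": None if rng is None else (rng.get("start"), rng.get("end")),
    }


def parse_rules(text):
    """Turn `iptables -nvL` output into dicts keyed by its fixed columns."""
    rules, chain = [], None
    for line in text.splitlines():
        m = re.match(r"Chain (\S+)", line)
        if m:
            chain = m.group(1)
            continue
        parts = line.split()
        if len(parts) < 9 or not parts[0].isdigit():
            continue  # column header or blank line
        _pkts, _bytes, target, proto, _opt, in_if, out_if, src, dst = parts[:9]
        rules.append({"chain": chain, "target": target, "proto": proto, "in_if": in_if,
                      "out_if": out_if, "src": src, "dst": dst, "extra": " ".join(parts[9:])})
    return rules


def has_rule(rules, chain, target, **want):
    """True if a rule in `chain` jumps to `target` and matches every requested column
    ("extra" is a substring match, the other columns must be equal)."""
    for r in rules:
        if r["chain"] != chain or r["target"] != target:
            continue
        if all((want[k] in r[k]) if k == "extra" else r[k] == want[k] for k in want):
            return True
    return False


def check_host_side(net, rules):
    """Return findings as strings; an empty list means nothing is missing on the host."""
    br, subnet, findings = net["bridge"], str(net["subnet"]), []
    if net["mode"] != "nat":
        findings.append(f"forward mode is {net['mode']!r}, expected 'nat'")
    if net["dhcp"] is None:
        findings.append("network has no DHCP range, guests cannot get a lease")
    for port, what in ((67, "DHCP requests"), (53, "DNS queries")):
        if not has_rule(rules, "INPUT", "ACCEPT", proto="udp", in_if=br, extra=f"dpt:{port}"):
            findings.append(f"INPUT lacks ACCEPT for {what} (udp/{port}) arriving on {br}")
    if not has_rule(rules, "OUTPUT", "ACCEPT", proto="udp", out_if=br, extra="dpt:68"):
        findings.append(f"OUTPUT lacks ACCEPT for DHCP replies (udp/68) leaving via {br}")
    if not has_rule(rules, "FORWARD", "ACCEPT", in_if=br, src=subnet):
        findings.append(f"FORWARD lacks ACCEPT for traffic from {subnet} entering via {br}")
    if not has_rule(rules, "FORWARD", "ACCEPT", out_if=br, dst=subnet, extra="RELATED,ESTABLISHED"):
        findings.append(f"FORWARD lacks ACCEPT for return traffic into {subnet} via {br}")
    return findings


def tap_mac_for(guest_mac):
    """libvirt names the host-side tap with the guest MAC and its first byte set to fe,
    so 52:54:00:9a:81:24 in `domiflist` and fe:54:00:9a:81:24 on vnet0 are one NIC."""
    return "fe" + guest_mac[2:]


if __name__ == "__main__":
    report = check_host_side(parse_network(NET_XML), parse_rules(IPTABLES))
    print("\n".join(report) if report else "host side OK: look inside the guest")
```

Run on the thread's data the report is empty, which matches the outcome above: the guest simply never ran a DHCP client on enp1s0. Once the interfaces.d file is in place, `virsh net-dhcp-leases default` (a libvirt command not shown in the thread) is the quickest way to confirm the lease was handed out before trying to reach the VM.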

Related

QEMU bridge attachment issue

I'm trying to create a default NAT and bridge interface on my QEMU machine. Naturally, I created the bridge interface in a separate file /etc/network/interfaces.d/virbr2. Here is the virbr2 file configuration:
# Configuring network virtual interface
# to be a virt switch
auto virbr2
iface virbr2 inet static
bridge_ports enp1s0
address 192.168.1.3
netmask 255.255.255.0
broadcast 192.168.1.255
up ip route add 192.168.1.2 via 192.168.1.1 dev enp1s0
bridge_stp off
bridge_waitport 0
bridge_fd 0
My general interface configuration file is pretty simple:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# auto launch enp1s0 interface after the host os is booted
# since we want create a bridge interface, let's attach
# it to bridge interface br0
auto enp1s0
iface enp1s0 inet manual
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
Thus, interface virbr2 was created with the proper IP address:
$ ip a | grep -A 5 virbr2
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr2 state UP group default qlen 1000
link/ether e8:d8:d1:51:15:c2 brd ff:ff:ff:ff:ff:ff
3: wlp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 04:ea:56:59:cf:a4 brd ff:ff:ff:ff:ff:ff
inet 192.168.31.69/24 brd 192.168.31.255 scope global dynamic noprefixroute wlp0s20f3
valid_lft 41947sec preferred_lft 41947sec
--
4: virbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d6:71:34:e1:fa:9b brd ff:ff:ff:ff:ff:ff
inet 192.168.1.3/24 brd 192.168.1.255 scope global virbr2
valid_lft forever preferred_lft forever
inet6 fdf7:2246:8eb:0:d471:34ff:fee1:fa9b/64 scope global dynamic mngtmpaddr
valid_lft forever preferred_lft forever
inet6 fe80::d471:34ff:fee1:fa9b/64 scope link
valid_lft forever preferred_lft forever
$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242daa58f02 no
virbr0 8000.525400d87725 yes
virbr2 8000.d67134e1fa9b no enp1s0
With the bridge interface created, I'm trying to launch my VM with the following command:
qemu-system-x86_64 \
-m 4096 \
-smp 4 \
-drive 'file=debian-opkg-server.qcow2,if=virtio,format=qcow2' \
-net 'user,hostfwd=tcp::2200-:22' \
-net nic \
-netdev 'tap,id=br1,ifname=virbr2,script=no,downscript=no' \
-device 'virtio-net-pci,netdev=br1'
After launching the script I get the following error message:
Unable to init server: Could not connect: Connection refused
qemu-system-x86_64: could not configure /dev/net/tun (virbr2): Invalid argument
How is it possible that the argument is invalid? The interface name is correct, so I have no idea why it's not working.

Nginx different IPs, same port - bind() fail

I'm trying to serve 2 different frontends on the same port 443 but with different IPs. However, nginx -t fails with nginx: [emerg] bind() to 10.10.1.1:443 failed (99: Cannot assign requested address). Here are my confs:
Conf 1:
server {
listen 10.10.0.1:443 ssl http2;
}
Conf 2:
server {
listen 10.10.1.1:443 ssl http2;
}
I have no port 443 open by any other process: netstat -tulpn | grep :443 gives nothing. I assume that the second bind fails after binding the first block. For example, if I change the second block to listen 133.10.1.1:443 I get no errors.
There are no default configs in my sites-enabled folder.
Please help sort it out =)
Upd:
# cat /etc/hosts
127.0.1.1 serv serv
127.0.0.1 localhost
#Custom
10.10.0.1 main.site
10.10.1.1 test.site
Upd:
# ip addr sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet *<external IP>*/20 brd *<external IP>* scope global eth0
valid_lft forever preferred_lft forever
inet 10.24.0.5/16 brd 10.24.255.255 scope global eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 10.150.0.2/16 brd 10.150.255.255 scope global eth1
valid_lft forever preferred_lft forever
4: int0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.10.0.1/16 scope global int0
valid_lft forever preferred_lft forever
10.10.0.0 is the tunnel network; server conf 1 works perfectly on 10.10.0.1 without conf 2 enabled.
Upd:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 <external IP> 0.0.0.0 UG 0 0 0 eth0
10.10.0.0 0.0.0.0 255.255.0.0 U 0 0 0 int0
10.24.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.150.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
<external IP> 0.0.0.0 255.255.240.0 U 0 0 0 eth0
This configuration produces the error; however, adding a separate 10.10.1.1/24 address to int0 (as opposed to just 10.10.0.1/16) solved the issue, like so:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 <external IP> 0.0.0.0 UG 0 0 0 eth0
10.10.0.0 0.0.0.0 255.255.255.0 U 0 0 0 int0
10.10.1.0 0.0.0.0 255.255.255.0 U 0 0 0 int0
10.24.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.150.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
<external IP> 0.0.0.0 255.255.240.0 U 0 0 0 eth0
Everything works fine now.
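The point this thread illustrates is that errno 99 (EADDRNOTAVAIL, "Cannot assign requested address") comes from bind() when the address in `listen` is not assigned to any local interface; a route covering 10.10.0.0/16 on int0 does not make 10.10.1.1 a local address. The script below is an added example, not part of the question: it pulls the `listen ip:port` directives out of the two server blocks above and checks them against `ip addr` output. The `ip addr` text is constructed here (the external address is a placeholder) and reproduces the state before and after the asker added 10.10.1.1/24 to int0.

```python
"""Check nginx `listen` directives against the host's configured addresses.

Added example (not the thread's code). Finds listen addresses that are not
assigned locally and would therefore fail bind() with EADDRNOTAVAIL.
"""
import ipaddress
import re

# The asker's two server blocks, verbatim.
CONFS = {
    "conf1": "server {\n    listen 10.10.0.1:443 ssl http2;\n}\n",
    "conf2": "server {\n    listen 10.10.1.1:443 ssl http2;\n}\n",
}

# Constructed `ip addr` output: int0 carries only 10.10.0.1/16, as in the
# question; 203.0.113.5 stands in for the redacted external address.
IP_ADDR_BEFORE = """\
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
    inet 203.0.113.5/20 brd 203.0.113.255 scope global eth0
4: int0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN
    inet 10.10.0.1/16 scope global int0
"""
# Same host after the fix the asker applied (10.10.1.1/24 added to int0).
IP_ADDR_AFTER = IP_ADDR_BEFORE + "    inet 10.10.1.1/24 scope global int0\n"

LISTEN_RE = re.compile(r"\blisten\s+([^;\s]+)")
INET_RE = re.compile(r"^\s*inet6?\s+(\S+)", re.MULTILINE)
WILDCARDS = {"*", "0.0.0.0", "::"}


def listen_addresses(conf_text):
    """(host, port) for every `listen` directive; a bare port means the wildcard '*'."""
    pairs = []
    for spec in LISTEN_RE.findall(conf_text):
        host, sep, port = spec.rpartition(":")
        if not sep:                       # "listen 443;" binds every address
            host, port = "*", spec
        pairs.append((host.strip("[]"), int(port)))
    return pairs


def local_addresses(ip_addr_text):
    """Addresses actually assigned to an interface (the inet/inet6 lines of `ip addr`).
    A route that covers an address is not enough: bind() needs the address itself."""
    return {str(ipaddress.ip_interface(cidr).ip) for cidr in INET_RE.findall(ip_addr_text)}


def unbindable_listens(confs, ip_addr_text, nonlocal_bind=False):
    """(conf name, host, port) for listens that would fail with EADDRNOTAVAIL.

    With the sysctl net.ipv4.ip_nonlocal_bind=1 the kernel lets a socket bind a
    non-local address, so nothing is reported in that mode."""
    if nonlocal_bind:
        return []
    local = local_addresses(ip_addr_text)
    return [(name, host, port)
            for name, text in confs.items()
            for host, port in listen_addresses(text)
            if host not in WILDCARDS and host not in local]


if __name__ == "__main__":
    for state, text in (("before", IP_ADDR_BEFORE), ("after", IP_ADDR_AFTER)):
        print(state, unbindable_listens(CONFS, text) or "all listen addresses are local")
```

Running it shows conf2's 10.10.1.1:443 as unbindable on the "before" host and nothing on the "after" host, which is exactly the difference between the two `route -n` listings above. The `nonlocal_bind` flag is there because ip_nonlocal_bind is the other common fix for this error (typical on failover setups where a VIP moves between hosts); adding the address to the interface, as done here, is the more transparent choice.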

Proxmox can't receive HTTP response but can make ICMP pings

I've recently set up Proxmox VE 6.2.
I have two network adapters: one is on the LAN network and the other is on the WAN network (USB RNDIS).
I've set up pfSense as a VM; as in the Netgate docs, I've created two bridges for WAN and LAN with those two physical NICs.
Everything is going fine; pfSense works as expected and all LAN clients can access the internet flawlessly through the pfSense VM.
But the issue is that Proxmox can't make HTTP requests, I know it's weird. It can successfully access the internet, like I can make pings to 1.1.1.1 or any publicly available IP.
I tried like this:
curl -vvv google.com
This is the output I got and this is where it gets stuck; all HTTP connections act the same way:
* Trying 216.58.197.46...
* TCP_NODELAY set
* Expire in 149896 ms for 3 (transfer 0x55772a88ddc0)
* Expire in 200 ms for 4 (transfer 0x55772a88ddc0)
* Connected to google.com (216.58.197.46) port 80 (#0)
> GET / HTTP/1.1
> Host: google.com
> User-Agent: curl/7.64.0
> Accept: */*
And it's stuck there and times out after a while. Can't make apt update work either. It seems to get connected but can't receive the response back.
This is the ping response:
ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=75.4 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=74.7 ms
No issues there.
This is one hell of a weird issue; I've never faced it before.
ip route list
default via 192.168.0.1 dev vmbr0 onlink
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.114
192.168.1.0/24 dev vmbr2 proto kernel scope link src 192.168.1.102
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp14s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 3c:07:71:55:54:6e brd ff:ff:ff:ff:ff:ff
3: enx0c5b8f279a64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
link/ether 0c:5b:8f:27:9a:64 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3c:07:71:55:54:6e brd ff:ff:ff:ff:ff:ff
inet 192.168.0.114/24 brd 192.168.0.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::3e07:71ff:fe55:546e/64 scope link
valid_lft forever preferred_lft forever
5: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:5b:8f:27:9a:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.102/24 brd 192.168.1.255 scope global dynamic vmbr2
valid_lft 84813sec preferred_lft 84813sec
inet6 fe80::e5b:8fff:fe27:9a64/64 scope link
valid_lft forever preferred_lft forever
6: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
link/ether 5a:1e:56:2a:0d:fe brd ff:ff:ff:ff:ff:ff
7: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
link/ether a2:fe:d5:1d:43:8f brd ff:ff:ff:ff:ff:ff
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Proxmox IP - 192.168.0.114 (Static configured)
pfSense Gateway IP - 192.168.0.1
WAN (Internal IP) - 192.168.1.101
vmbr0 - LAN bridge
vmbr2 - WAN bridge
You should probably disable hardware checksum offloading.
This worked for me on virtualized hardware (HVM).
See this post:
https://askubuntu.com/questions/597894/can-ping-but-cannot-wget-on-host-with-bridge-interface

Network unreachable inside docker container without --net=host parameter

Problem: there is no internet connection in the docker container.
Symptoms: ping 8.8.8.8 doesn't work. Wireshark on the host system gives back:
19 10.866212113 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=0/0, ttl=64
20 11.867231972 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=1/256, ttl=64
21 12.868331353 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=2/512, ttl=64
22 13.869400083 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=3/768, ttl=64
But! If the container was started with --net=host, the internet would work perfectly.
What I've tried so far:
altering DNS
adding --ip-masq=true to /etc/default/docker (with restart off)
enabling everything related to masquerade / ip_forward
altering default route
everything suggested here
Host config:
$ sudo route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.4.2.1 0.0.0.0 UG 0 0 0 eno1.3001
default 10.3.2.1 0.0.0.0 UG 100 0 0 eno2
10.3.2.0 * 255.255.254.0 U 100 0 0 eno2
10.4.2.0 * 255.255.254.0 U 0 0 0 eno1.3001
nerv8.i 10.3.2.1 255.255.255.255 UGH 100 0 0 eno2
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
sudo iptables -L, cat /etc/network/interfaces, ifconfig, iptables -t nat -L -nv
Everything is fine; forwarding is also enabled:
$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
This is not the full answer you are looking for, but I would like to give some explanation on why the internet is working:
If container was started with --net=host internet would work
perfectly.
Docker by default supports three networks. In this mode (host) the container will share the host's network stack and all interfaces from the host will be available to the container. The container's host name will match the hostname on the host system.
# docker run -it --net=host ubuntu:14.04 /bin/bash
root@labadmin-VirtualBox:/# hostname
labadmin-VirtualBox
Even the IP configuration is the same as the host system's IP configuration:
root@labadmin-VirtualBox:/# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
root@labadmin-VirtualBox:/# exit
exit
HOST SYSTEM IP CONFIGURATION
# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
Refer to this for more information about docker networking.
Can you run "sudo ifconfig" and see if the range of IPs for your internet connection (typically wlan0) is colliding with the range for the docker0 interface, 172.17.0.0?
I had this issue with my office network (while it was working fine at home) that it ran on 172.17.0.X and Docker tried to pick exactly that range.
This might be of help: http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
I ended up creating my own bridge network for Docker.
Check that net.ipv4.conf.all.forwarding (not net.ipv4.ip_forward) is set to 1, if not, turn it on:
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
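The collision the second answer describes (the uplink handing out addresses from the same 172.17.0.0 range that docker0 uses) is easy to miss by eye in a routing table, so here is an added example, not code from any of the answers, that detects it from `route -n` style output. HOME_ROUTES is the asker's table above (where there is no collision); OFFICE_ROUTES is constructed to reproduce the office scenario from the answer. `free_bridge_subnet` picks a replacement range from 172.16.0.0/12 suitable for dockerd's `--bip` option, which is the "own bridge network" approach the answer links to.

```python
"""Detect address-range collisions between docker0 and other interfaces.

Added example (not from the thread). Parses `route -n` style output, reports
routes on other interfaces that overlap the docker0 network, and suggests a
free private /16 to move the Docker bridge to.
"""
import ipaddress

# The asker's routing table, verbatim (output of `sudo route`, so names appear).
HOME_ROUTES = """\
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.4.2.1        0.0.0.0         UG    0      0        0 eno1.3001
default         10.3.2.1        0.0.0.0         UG    100    0        0 eno2
10.3.2.0        *               255.255.254.0   U     100    0        0 eno2
10.4.2.0        *               255.255.254.0   U     0      0        0 eno1.3001
nerv8.i         10.3.2.1        255.255.255.255 UGH   100    0        0 eno2
172.17.0.0      *               255.255.0.0     U     0      0        0 docker0
"""

# Constructed: an office wireless uplink that hands out 172.17.0.x addresses,
# the situation the answer above describes.
OFFICE_ROUTES = """\
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.0.1      0.0.0.0         UG    600    0        0 wlan0
172.17.0.0      *               255.255.255.0   U     600    0        0 wlan0
172.17.0.0      *               255.255.0.0     U     0      0        0 docker0
"""


def parse_routes(text):
    """Return (network, iface) for every non-default route in `route` output.

    Default routes are skipped (they cover everything by design) and so are
    destinations that `route` resolved to a hostname, since they carry no
    subnet information."""
    routes = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 8 or parts[0] == "Destination":
            continue  # banner, column header, or malformed line
        dest, _gw, mask, _flags, _metric, _ref, _use, iface = parts
        if mask == "0.0.0.0":
            continue
        try:
            routes.append((ipaddress.ip_network(f"{dest}/{mask}"), iface))
        except ValueError:
            continue  # e.g. the "nerv8.i" host route printed with a name
    return routes


def docker_collisions(routes, bridge="docker0"):
    """Networks on other interfaces that overlap the bridge's own network.

    Any hit means replies from the uplink may be routed to docker0 (or the
    other way round), which is the silent packet loss seen in the question."""
    bridge_nets = [n for n, i in routes if i == bridge]
    return [(str(n), i) for n, i in routes
            if i != bridge and any(n.overlaps(b) for b in bridge_nets)]


def free_bridge_subnet(routes, pool="172.16.0.0/12", prefix=16, bridge="docker0"):
    """First /prefix inside `pool` that no non-bridge route overlaps, or None.

    Pass the result's first host to dockerd, e.g. --bip 172.16.0.1/16, to move
    docker0 off the colliding range."""
    others = [n for n, i in routes if i != bridge]
    for cand in ipaddress.ip_network(pool).subnets(new_prefix=prefix):
        if not any(cand.overlaps(n) for n in others):
            return str(cand)
    return None


if __name__ == "__main__":
    for name, text in (("home", HOME_ROUTES), ("office", OFFICE_ROUTES)):
        routes = parse_routes(text)
        print(name, "collisions:", docker_collisions(routes) or "none",
              "| free bridge subnet:", free_bridge_subnet(routes))
```

On the asker's table this reports no collision, which is consistent with the third answer pointing at `net.ipv4.conf.all.forwarding` instead; on the constructed office table it flags 172.17.0.0/24 on wlan0 and proposes 172.16.0.0/16 for the Docker bridge.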

why failed to forward public ip to docker NAT ip

For example, on the physical machine:
# ip addr
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ..
inet 10.32.230.90/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.32.230.61/24 scope global secondary eth2
valid_lft forever preferred_lft forever
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
link/ether 02:42:65:1b:b0:25 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
"10.32.230.90" is the main IP of this machine, and "10.32.230.61" is secondary added with "ip addr add 10.32.230.61/24 dev eth2".
After creating a docker instance with IP = 172.17.0.10, I add the following rules to connect the native IP with the secondary IP:
# iptables -A POSTROUTING -j MASQUERADE
# iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to 172.17.0.10
# echo 1 > /proc/sys/net/ipv4/ip_forward
But it doesn't work: an external PC still cannot get access to 10.32.230.61, but can get access to 10.32.230.90. What's the solution?
(From a certain PC whose IP is, for example, 10.32.230.95)
# ping 10.32.230.90
PING 10.32.230.90 (10.32.230.90) 56(84) bytes of data.
64 bytes from 10.32.230.90: icmp_seq=1 ttl=52 time=280 ms
64 bytes from 10.32.230.90: icmp_seq=2 ttl=52 time=336 ms
^C
# ping 10.32.230.61
(Timeout..)
I am sure that there is no IP conflict: 10.32.230.61 is not used by any other host.