octavia: what is the loadbalancer IP assigned to? - openstack

I am trying to understand how Octavia is put together. I created a loadbalancer on a vlan network. It was assigned an address of 10.40.0.7. When I do openstack loadbalancer list, I see a vip_address of 10.40.0.7 which is not assigned to any amphorae.
I want to understand where the loadbalancer address is mapped. It is not a host. I can't ssh to that address. Perhaps it is the amphora driver, but what exactly is that? I can't find that address in any namespace, and I can't see it assigned to any bridge. What is it assigned to?
Thanks
Ranga

It is not a host.
It is a host! An amphora is just a nova server -- the same thing you get when you run openstack server create. The difference is that the amphora is owned by the service project, so you'll only see it if you run (as admin) openstack server list --all-projects. For example:
$ openstack --os-cloud as_me loadbalancer list
+--------------------------------------+---------+----------------------------------+-------------+---------------------+----------+
| id | name | project_id | vip_address | provisioning_status | provider |
+--------------------------------------+---------+----------------------------------+-------------+---------------------+----------+
| 64a6a56d-beeb-4ee2-b495-1cdc26ffd399 | test_lb | 0ac1e30189da48b387cf3c2f5582b2a3 | 10.254.0.6 | ACTIVE | octavia |
+--------------------------------------+---------+----------------------------------+-------------+---------------------+----------+
$ openstack --os-cloud as_admin server list --all-projects | grep amphora
| f6cd75fe-8513-4aae-bee9-af9362525703 | amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af | ACTIVE | lb-mgmt-net=172.24.0.16; test_lb_net=10.254.0.11; test_net1=10.0.1.5; test_net0=10.0.0.4 | octavia-amphora-13.0-20181107.1.x86_64 | octavia_65 |
If you look at that server, you'll see it has several IP addresses:
The one you assigned to it when you created the loadbalancer, and
A management network address
Addresses on any subnets to which it is attached
You can ssh into the amphora using the management network address. You should be able to reach it from your controllers. You'll need the appropriate ssh key; where to find that probably depends a lot on how you installed things. I'm using tripleo, and it looks as if the install uses ~/.ssh/id_rsa from the stack user for the amphora ssh key.
[controller ~]$ ssh -i amphora_private_key cloud-user@172.24.0.7
Last login: Thu Nov 15 22:01:16 2018 from 172.24.0.6
[cloud-user@amphora-7d48e10b-5ba4-42c9-bcd5-941d224b2a46 ~]$
Update
The loadbalancer VIP is assigned to an interface inside a namespace on
the amphora. Given the above configuration, I see:
[root@amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af ~]# ip netns
amphora-haproxy (id: 0)
[root@amphora-50dddb41-decf-4b3b-bb7a-f35a751d74af ~]# ip netns exec amphora-haproxy ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:07:d2:26 brd ff:ff:ff:ff:ff:ff
inet 10.254.0.11/24 brd 10.254.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.254.0.6/24 brd 10.254.0.255 scope global secondary eth1:0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe07:d226/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:21:9a:d1 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.4/24 brd 10.0.0.255 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe21:9ad1/64 scope link
valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:2a:63:58 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.5/24 brd 10.0.1.255 scope global eth3
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe2a:6358/64 scope link
valid_lft forever preferred_lft forever
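If you'd rather not grep the nova server list for the amphora, the octavia client can also map a loadbalancer to its amphorae directly. A minimal sketch, assuming a reasonably recent python-octaviaclient (older releases may lack the --loadbalancer filter); the lb_network_ip column in the output is the management address you can ssh to:
$ openstack loadbalancer amphora list --loadbalancer 64a6a56d-beeb-4ee2-b495-1cdc26ffd399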

Related

QEMU bridge attachment issue

I'm trying to create a default NAT and bridge interface for my QEMU machine. Naturally, I created the bridge interface in a separate file, /etc/network/interfaces.d/virbr2. Here is the virbr2 file configuration:
# Configuring network virtual interface
# to be a virt switch
auto virbr2
iface virbr2 inet static
bridge_ports enp1s0
address 192.168.1.3
netmask 255.255.255.0
broadcast 192.168.1.255
up ip route add 192.168.1.2 via 192.168.1.1 dev enp1s0
bridge_stp off
bridge_waitport 0
bridge_fd 0
My general interfaces configuration file is pretty simple:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# auto launch enp1s0 interface after the host os is booted
# since we want create a bridge interface, let's attach
# it to bridge interface br0
auto enp1s0
iface enp1s0 inet manual
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
Thus, interface virbr2 was created with the proper IP address:
$ ip a | grep -A 5 virbr2
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr2 state UP group default qlen 1000
link/ether e8:d8:d1:51:15:c2 brd ff:ff:ff:ff:ff:ff
3: wlp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 04:ea:56:59:cf:a4 brd ff:ff:ff:ff:ff:ff
inet 192.168.31.69/24 brd 192.168.31.255 scope global dynamic noprefixroute wlp0s20f3
valid_lft 41947sec preferred_lft 41947sec
--
4: virbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d6:71:34:e1:fa:9b brd ff:ff:ff:ff:ff:ff
inet 192.168.1.3/24 brd 192.168.1.255 scope global virbr2
valid_lft forever preferred_lft forever
inet6 fdf7:2246:8eb:0:d471:34ff:fee1:fa9b/64 scope global dynamic mngtmpaddr
valid_lft forever preferred_lft forever
inet6 fe80::d471:34ff:fee1:fa9b/64 scope link
valid_lft forever preferred_lft forever
$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242daa58f02 no
virbr0 8000.525400d87725 yes
virbr2 8000.d67134e1fa9b no enp1s0
With the bridge interface created, I try to launch my VM with the following command:
qemu-system-x86_64 \
-m 4096 \
-smp 4 \
-drive 'file=debian-opkg-server.qcow2,if=virtio,format=qcow2' \
-net 'user,hostfwd=tcp::2200-:22' \
-net nic \
-netdev 'tap,id=br1,ifname=virbr2,script=no,downscript=no' \
-device 'virtio-net-pci,netdev=br1'
After launching the command, I get the following error message:
Unable to init server: Could not connect: Connection refused
qemu-system-x86_64: could not configure /dev/net/tun (virbr2): Invalid argument
How is it possible that the argument is invalid? The interface name is correct, so I have no idea why it's not working.
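For what it's worth, QEMU's tap netdev expects ifname to name a tap device, not a bridge, and virbr2 is a bridge, which would explain the "Invalid argument" from the tun setup. A minimal sketch of one common pattern (the tap name vmtap0 is just an example): create a tap, enslave it to virbr2, and point QEMU at the tap instead:
# create a tap device and attach it to the existing bridge
sudo ip tuntap add dev vmtap0 mode tap user "$USER"
sudo ip link set dev vmtap0 master virbr2
sudo ip link set dev vmtap0 up
# then use ifname=vmtap0 instead of ifname=virbr2 in the -netdev tap option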

How do I stop DHCP from requesting an address for a static interface?

I have a Raspberry Pi 4 running Ubuntu 21.10 with a static IP address on eth0. Despite that, I keep getting a secondary 'dynamic' DHCP address on it.
netplan
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.0.10/24
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        search: [lan]
        addresses: [192.168.0.12]
ip addr show
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether dc:a6:32:da:df:55 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.10/23 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.0.225/23 brd 192.168.1.255 scope global secondary dynamic eth0
valid_lft 68727sec preferred_lft 68727sec
inet6 fe80::dea6:32ff:feda:df55/64 scope link
valid_lft forever preferred_lft forever
Even if I delete that address, it keeps coming back after a few minutes. I have another Pi with the "same" configuration and it doesn't have this problem.
I also have the /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg in place, as per the instructions.
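For reference, that file usually contains a single stanza telling cloud-init to leave networking alone (a sketch of the documented content):
network: {config: disabled}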
Have you tried using the word false instead of no for the dhcp4 entry in your netplan?
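For example, the relevant part of the netplan file would then read (a sketch showing only the suggested change):
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false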

VM & host can't ping each other over bridge

I have a host machine with Debian 10 & QEMU-KVM. I installed packages and rebooted:
sudo apt install qemu-kvm virt-manager
sudo reboot
So now I want to create a bridge that will enable my virtual servers to (a) connect to the network and (b) be visible to the host machine and other computers on the network.
I read a dozen tutorials on how to do this and failed miserably every time. I had some success setting up the bridge with (a) the iproute2 package and (b) the virt-manager package (run as superuser).
Trying as root:
Following the ArchWiki, I set up my bridge using these commands:
sudo ip link add virtual_bridge type bridge
sudo ip link set dev virtual_bridge up
I then reset the ethernet card and connect it to the bridge as its slave:
sudo ip link set dev enx24f5a2f17b27 down
sudo ip addr flush dev enx24f5a2f17b27
sudo ip link set dev enx24f5a2f17b27 up
sudo ip link set dev enx24f5a2f17b27 master virtual_bridge
And then I open the GUI application:
sudo virt-manager
I right-click the QEMU/KVM session (qemu:///system) and choose Connect:
When the session is connected I start creating a new virtual machine. During its creation I come to a window asking me to choose the type of virtual network. There are two options. The first one has sub-options, while the second one allows manual input of the device:
Host device enx24f5a2f17b27: macvtap
Bridge
VEPA
Private
Passthrough
Specify shared device name
I tried choosing the sub-options offered by the first option, but when selected they issue a warning:
In most configurations macvtap does not work for host to guest network communication
This is not an option for me because my virtual servers will need two-way communication. This is why I choose the second option and manually specify my bridge virtual_bridge:
Then I start the virtual machine, which can browse the internet, and so can the host machine. Both are assigned IPs in the same network. But ping doesn't work in either direction: the host can't ping the virtual machine and vice versa.
I can't explain this, because the ArchWiki states that a bridge should be transparent like a switch, and devices should therefore be able to ping each other:
A bridge is a piece of software used to unite two or more network
segments. A bridge behaves like a virtual network switch, working
transparently (the other machines do not need to know or care about
its existence).
If I check the network settings on the host:
ziga@ziga-laptop:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: wlp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether c4:85:08:3c:1a:59 brd ff:ff:ff:ff:ff:ff
3: enx24f5a2f17b27: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virtual_bridge state UP group default qlen 1000
link/ether 24:f5:a2:f1:7b:27 brd ff:ff:ff:ff:ff:ff
inet 192.168.64.100/24 brd 192.168.64.255 scope global enx24f5a2f17b27
valid_lft forever preferred_lft forever
inet6 fe80::26f5:a2ff:fef1:7b27/64 scope link
valid_lft forever preferred_lft forever
32: virtual_bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 24:f5:a2:f1:7b:27 brd ff:ff:ff:ff:ff:ff
inet6 fe80::26f5:a2ff:fef1:7b27/64 scope link
valid_lft forever preferred_lft forever
34: vnet0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virtual_bridge state UNKNOWN group default qlen 1000
link/ether fe:54:00:c4:3e:62 brd ff:ff:ff:ff:ff:ff
inet 169.254.82.75/16 brd 169.254.255.255 scope global vnet0
valid_lft forever preferred_lft forever
inet6 fe80::2c93:eff:fea5:c52b/64 scope link
valid_lft forever preferred_lft forever
From the above, I can confirm that my ethernet interface enx24f5a2f17b27 and vnet0 (which was automatically created by the virtual machine) are both slaves to virtual_bridge (note the keywords master virtual_bridge).
To be honest, I was expecting the GUI application to also create a TAP device, but it only created vnet0... Is this actually a TAP device?
How can I make the connection two-way?
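One thing that stands out in the ip addr output above is that the host's 192.168.64.100 still sits on the enslaved NIC rather than on virtual_bridge itself, which commonly breaks host-to-guest traffic over a bridge. A minimal sketch of moving it, assuming the same addressing (the default route would also need re-pointing at the bridge afterwards):
sudo ip addr flush dev enx24f5a2f17b27
sudo ip addr add 192.168.64.100/24 dev virtual_bridge
sudo ip link set dev virtual_bridge up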
Trying as a normal user (without bridge):
I deleted virtual_bridge and virtual_tap so that everything was back to normal.
ziga@ziga-laptop:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: wlp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether c4:85:08:3c:1a:59 brd ff:ff:ff:ff:ff:ff
3: enx24f5a2f17b27: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 24:f5:a2:f1:7b:27 brd ff:ff:ff:ff:ff:ff
inet 192.168.64.100/24 brd 192.168.64.255 scope global enx24f5a2f17b27
valid_lft forever preferred_lft forever
inet6 fe80::26f5:a2ff:fef1:7b27/64 scope link
valid_lft forever preferred_lft forever
I noticed that if I start virt-manager with sudo and use a qcow2 image, that image becomes owned by root and its group becomes root. This was part of the reason I avoided using virt-manager as a normal user. So I fixed this and started virt-manager as a normal user.
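A sketch of such an ownership fix (the image path here is only a placeholder):
sudo chown "$USER": /path/to/image.qcow2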
I created the identical virtual machine, but when the network window popped up, it had different (!) options:
Userspace networking
Specify shared device name
I was unable to specify my interface enx24f5a2f17b27 manually with the second option, so I chose userspace networking.
Then I started the virtual machine, which can browse the internet, and so can the host machine. They are assigned IPs in totally different networks. Ping doesn't work in either direction: the host can't ping the virtual machine and vice versa.
Trying as a normal user (with bridge)
So now I first set up my bridge precisely like I did in my first attempt as root:
sudo ip link add virtual_bridge type bridge
sudo ip link set dev virtual_bridge up
sudo ip link set dev enx24f5a2f17b27 down
sudo ip addr flush dev enx24f5a2f17b27
sudo ip link set dev enx24f5a2f17b27 up
sudo ip link set dev enx24f5a2f17b27 master virtual_bridge
so that I have:
ziga#ziga-laptop:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: wlp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether c4:85:08:3c:1a:59 brd ff:ff:ff:ff:ff:ff
3: enx24f5a2f17b27: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virtual_bridge state UP group default qlen 1000
link/ether 24:f5:a2:f1:7b:27 brd ff:ff:ff:ff:ff:ff
inet 192.168.64.100/24 brd 192.168.64.255 scope global enx24f5a2f17b27
valid_lft forever preferred_lft forever
inet6 fe80::26f5:a2ff:fef1:7b27/64 scope link
valid_lft forever preferred_lft forever
11: virtual_bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 24:f5:a2:f1:7b:27 brd ff:ff:ff:ff:ff:ff
inet6 fe80::805f:cfff:feb6:ec91/64 scope link
valid_lft forever preferred_lft forever
I started virt-manager as a normal user and created the identical virtual machine. When the network window pops up, it has the same options as before:
Userspace networking
Specify shared device name
I was unable to specify my bridge virtual_bridge manually with the second option because QEMU reports an internal error:

Host IP is not visible to LXC guests. Open vSwitch bridge

I have configured an OVS bridge for LXC containers as described in "LXC with Open vSwitch".
This is the bridge configuration:
# ovs-vsctl show
1b236728-4637-42a5-8b81-53d4c93a6803
Bridge "switch0"
Port vethNSCEGY
Interface vethNSCEGY
Port "switch0"
Interface "switch0"
type: internal
Port "vethD6TFEB"
Interface "vethD6TFEB"
ovs_version: "2.3.2"
switch0 is an interface on the host and has IP 192.168.100.1/24.
vethNSCEGY and vethD6TFEB are interfaces for the LXC guests.
The first LXC guest, with IP 192.168.100.10/24, can ping the second LXC guest at 192.168.100.11/24, but it can't ping the host IP 192.168.100.1/24.
Is this normal for OVS? Or do I need to enable something?
PS. IPs on my interfaces:
# ip a
...
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
link/ether 52:9d:e1:60:1d:56 brd ff:ff:ff:ff:ff:ff
5: switch0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
link/ether 16:63:eb:47:13:41 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 scope global switch0
valid_lft forever preferred_lft forever
35: vethNSCEGY: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
link/ether fe:d1:06:81:69:ed brd ff:ff:ff:ff:ff:ff
inet6 fe80::fcd1:6ff:fe81:69ed/64 scope link
valid_lft forever preferred_lft forever
37: vethD6TFEB: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
link/ether fe:ca:e9:16:dd:81 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fcca:e9ff:fe16:dd81/64 scope link
valid_lft forever preferred_lft forever
It was my mistake again: switch0 was down, so bringing the interface up fixed it:
# ip link set dev switch0 up

No DNS inside lxc container [closed]

I have an LXC Debian container which runs on my Arch Linux host. I tried to set up a bridge (lxc-bridge-nat) using wlan0, but I can't ping the outside world from my container unless I ping using the IP instead of the domain name.
I can ping the container from the host and the host from the container.
Here is some information:
Host: ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether d4:be:d9:70:bd:e5 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether c4:85:08:b4:5c:e9 brd ff:ff:ff:ff:ff:ff
inet 192.168.42.121/24 brd 192.168.42.255 scope global wlan0
valid_lft forever preferred_lft forever
inet6 fe80::c685:8ff:feb4:5ce9/64 scope link
valid_lft forever preferred_lft forever
4: lxc-bridge-nat: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether fe:b3:b7:a2:e1:31 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.1/24 brd 192.168.50.255 scope global lxc-bridge-nat
valid_lft forever preferred_lft forever
inet6 fe80::b0c8:d2ff:fe73:aa50/64 scope link
valid_lft forever preferred_lft forever
Host: ip route
default via 192.168.42.1 dev wlan0 proto static
192.168.42.0/24 dev wlan0 proto kernel scope link src 192.168.42.121 metric 9
192.168.50.0/24 dev lxc-bridge-nat proto kernel scope link src 192.168.50.1
Container: ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:ff:aa:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.2/24 brd 192.168.50.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2ff:aaff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
Container: ip route
default via 192.168.50.1 dev eth0
192.168.50.0/24 dev eth0 proto kernel scope link src 192.168.50.2
Container: /etc/resolv.conf
nameserver 212.27.40.240
nameserver 212.27.40.241
Hostnames not resolving points immediately at the DNS servers, and posting the content of resolv.conf was useful here; they were the only part of this setup outside of your immediate control.
As you've found, simply pinging a remote server doesn't always help - running nslookup against them showed that they were the problem. (As a counterpoint, due to the way ping itself works, a lack of response from a ping doesn't mean the server is down - pings are trivial to block at the firewall level.)
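A quick way to reproduce that check from inside the container (a sketch; the hostname is only an example) is to query the configured resolver directly:
nslookup example.com 212.27.40.240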
To work around your DNS issue, you can make use of other DNS servers, such as those hosted by Google. Simply alter your resolv.conf to:
nameserver 8.8.8.8
nameserver 8.8.4.4
Adding 8.8.8.8 in /etc/network/interfaces is not a good idea. You should leave the interfaces file unchanged (restore the old settings) and modify only the /etc/resolv.conf file.
You have set the container IP to the same IP as one of Google's DNS servers.
Sure, the ping will work, but when you try to resolve a DNS hostname your container will contact itself and generate an error.
