Octavia: Trying to delete immutable loadbalancer - openstack

I have a loadbalancer (see status below) that I want to delete. I already deleted the instances in its pool. Full disclosure: This is on a Devstack which I rebooted, and where I recreated the lb-mgmt-network routing manually. I may have overlooked a detail after the reboot. The loadbalancer worked before the reboot.
The first step to delete the loadbalancer is to delete its pool members. This fails as follows:
$ alias olb='openstack loadbalancer'
$ olb member delete website-pool 08f55..
Load Balancer 1ff... is immutable and cannot be updated. (HTTP 409)
What can I do to make it mutable?
Below, see the loadbalancer's status after recreating the o-hm0 route and restarting the amphora. Its provisioning status is ERROR, but according to the API, this should enable me to delete it:
$ olb status show kubelb
{
    "loadbalancer": {
        "id": "1ff7682b-3989-444d-a1a8-6c91aac69c45",
        "name": "kubelb",
        "operating_status": "ONLINE",
        "provisioning_status": "ERROR",
        "listeners": [
            {
                "id": "d3c3eb7f-345f-4ded-a7f8-7d97e3af0fd4",
                "name": "weblistener",
                "operating_status": "ONLINE",
                "provisioning_status": "ACTIVE",
                "pools": [
                    {
                        "id": "9b0875e0-7d16-4ebc-9e8d-d1b90d4264a6",
                        "name": "website-pool",
                        "provisioning_status": "ACTIVE",
                        "operating_status": "ONLINE",
                        "members": [
                            {
                                "id": "08f55bba-260a-4b83-ad6d-f9d6b44f0e2c",
                                "name": "",
                                "operating_status": "NO_MONITOR",
                                "provisioning_status": "ACTIVE",
                                "address": "172.16.0.21",
                                "protocol_port": 80
                            },
                            {
                                "id": "f7665e90-dad0-480e-8ef4-65e0a042b9fa",
                                "name": "",
                                "operating_status": "NO_MONITOR",
                                "provisioning_status": "ACTIVE",
                                "address": "172.16.0.22",
                                "protocol_port": 80
                            }
                        ]
                    }
                ]
            }
        ]
    }
}

When you have a load balancer in ERROR state you have two options:
Delete the load balancer using the cascade delete option (--cascade on the cli).
Use the failover API to tell Octavia to repair the load balancer once your cloud is fixed.
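For example, using the alias from the question (a sketch with the standard python-octaviaclient commands; substitute your own load balancer name or ID):
$ olb delete --cascade kubelb
$ olb failover kubelb
The cascade delete removes the load balancer together with its listeners, pools and members; the failover rebuilds the amphora once the lb-mgmt-net is healthy again.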
In Octavia, operating status is a measured/observed status. If it doesn't go ONLINE, there is likely a network configuration issue with the lb-mgmt-net and the health heartbeat messages (UDP 5555) are not making it back to the health manager controller.
That said, Devstack is not set up to survive a reboot. Specifically, Neutron and the network interfaces will be in an improper state. As you have found, you can manually reconfigure those and usually get things working again.
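A quick way to check whether those heartbeats are arriving on the controller (o-hm0 and UDP port 5555 are the Devstack defaults assumed here):
$ sudo tcpdump -ni o-hm0 udp port 5555
If nothing shows up while an amphora is running, the heartbeat path is broken.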

If I understand the documentation and source code right, a load balancer in provisioning status ERROR can be deleted but not modified. Unfortunately, it can only be deleted after its pools and listeners have been deleted, which would modify the load balancer. Looks like a chicken-and-egg problem to me. I "solved" this by recreating the cloud from scratch. I guess I could also have cleaned up the database.
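For reference, one way to "clean up" via the database would be to reset the provisioning status so the load balancer becomes mutable again. A minimal sketch, assuming Devstack's MySQL setup with an octavia database and a load_balancer table that has a provisioning_status column; verify the schema of your Octavia release and back up the database before touching it:
$ mysql -u root -p octavia
mysql> -- assumption: table/column names match your Octavia release
mysql> UPDATE load_balancer SET provisioning_status = 'ACTIVE' WHERE id = '1ff7682b-3989-444d-a1a8-6c91aac69c45';
With the load balancer back in ACTIVE, the members, pools and listeners could then be deleted through the API in the normal order.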
An analysis of the stack.sh log file revealed that a few additional steps were needed to make the Devstack cloud reboot-proof. To make Octavia ready:
Create /var/run/octavia, owned by the stack user
Ensure o-hm0 is up
Give o-hm0 the correct MAC and IP addresses, both found in the details of Neutron port octavia-health-manager-standalone-listen-port
Add netfilter rules for traffic coming from o-hm0
At this point, I feel I can reboot Devstack and still have functioning load balancers. Strangely, all load balancers' operating_status (as well as their listeners' and pools' operating_status) is OFFLINE. However, that doesn't prevent them from working. I have not found out how to make it ONLINE.
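One thing I would still check for the OFFLINE mystery (an untested guess; the unit name assumes Devstack's usual devstack@<service> naming): whether the health manager is actually receiving and processing heartbeats after the reboot, e.g. by watching its log:
$ sudo journalctl -u devstack@o-hm.service -f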
In case anybody is interested, below is the script I use after rebooting Devstack. I also changed the Netplan configuration so that br-ex gets the server's IP address (further below).
restore-devstack script:
$ cat restore-devstack
source ~/devstack/openrc admin admin
if losetup -a | grep -q /opt/stack/data/stack-volumes
then echo loop devices are already set up
else
sudo losetup -f --show --direct-io=on /opt/stack/data/stack-volumes-default-backing-file
sudo losetup -f --show --direct-io=on /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
echo restarting Cinder Volume service
sudo systemctl restart devstack#c-vol
fi
sudo lvs
openstack volume service list
echo
echo recreating /var/run/octavia
sudo mkdir /var/run/octavia
sudo chown stack /var/run/octavia
echo
echo setting up the o-hm0 interface
if ip l show o-hm0 | grep -q 'state DOWN'
then sudo ip l set o-hm0 up
else echo o-hm0 interface is not DOWN
fi
HEALTH_IP=$(openstack port show octavia-health-manager-standalone-listen-port -c fixed_ips -f yaml | grep ip_address | cut -d' ' -f3)
echo health monitor IP is $HEALTH_IP
if ip a show dev o-hm0 | grep -q $HEALTH_IP
then echo o-hm0 interface has IP address
else sudo ip a add ${HEALTH_IP}/24 dev o-hm0
fi
HEALTH_MAC=$(openstack port show octavia-health-manager-standalone-listen-port -c mac_address -f value)
echo health monitor MAC is $HEALTH_MAC
sudo ip link set dev o-hm0 address $HEALTH_MAC
echo o-hm0 MAC address set to $HEALTH_MAC
echo route to loadbalancer network:
ip r show 192.168.0.0/24
echo
echo fix netfilter for Octavia
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 20514 -j ACCEPT
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 10514 -j ACCEPT
sudo iptables -A INPUT -i o-hm0 -p udp -m udp --dport 5555 -j ACCEPT
echo fix netfilter for Magnum
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 443 -j ACCEPT
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -d 192.168.1.200/32 -p tcp -m tcp --dport 9511 -j ACCEPT
Netplan config:
$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: no
    br-ex:
      addresses: [192.168.1.200/24]
      nameservers: { addresses: [192.168.1.16,1.1.1.1] }
      gateway4: 192.168.1.1
  version: 2
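After editing the file, the configuration can be applied without a reboot using standard netplan tooling:
$ sudo netplan apply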

Related

Suricata doesn't drop packets

I have a server with Suricata (169.69.1.11) installed and a specific rule:
drop ICMP any any -> 169.69.1.11 any (msg: "ping dropped";sid:10001;)
On another VM I execute:
ping 169.69.1.11 -c 5
At this point nothing works as intended: the pings get through and nothing is registered in fast.log, so on the Suricata machine I execute
sudo suricata -i enp0s8
and I ping again with the same command (5 pings).
On the other machine everything seems okay and the 5 pings appear to get through, but when I look at the Suricata log /var/log/suricata/fast.log it shows this line:
03/25/2022-11:11:05.231735 [wDrop] [**] [1:10001:0] ping dropped [**] [Classification: (null)] [Priority: 3] {ICMP} 169.69.1.10:8 -> 169.69.1.11:0
Why do the pings get through instead of being blocked?
Why do I ping 5 times but only one ping is logged?
My first problem was that I didn't have Suricata running as an IPS. First delete your iptables rules and send traffic to the NFQUEUE target with
sudo iptables -F
sudo iptables -I INPUT -j NFQUEUE
sudo iptables -I OUTPUT -j NFQUEUE
sudo iptables -I FORWARD -j NFQUEUE
and run Suricata in NFQUEUE mode with -D to leave it running in the background:
sudo suricata -q 0 -D
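To verify the drop rule is now enforced, ping again from the other VM and, on the Suricata machine, watch the log (same addresses and paths as in the question):
$ ping 169.69.1.11 -c 5
$ sudo tail -f /var/log/suricata/fast.log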

IPtables NAT/Masquerade to allow OpenStack instances to access sites external to the laptop they're running on

I have OpenStack running on a Fedora laptop. OpenStack hates network interfaces that are managed by NetworkManager, so I set up a dummy interface that's used as the port for the br-ex interface, which OpenStack allows instances to communicate through to the outside world. I can connect to the floating IPs fine, but they can't get past the subnet that br-ex has. I'd like them to be able to reach addresses external to the laptop. I suspect some iptables NAT/masquerading magic is required. Does anyone have any ideas?
For a CentOS 7 OpenStack with 3 nodes you should use the legacy network service.
Just install net-tools and disable NetworkManager:
yum install net-tools -y;
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
chkconfig network on
You also need iptables, not firewalld.
yum install -y iptables-services
systemctl enable iptables.service
systemctl disable firewalld.service
systemctl stop firewalld.service
The controller node has one NIC.
The network and compute nodes have 2 NICs.
Edit interfaces on all nodes:
for the network node: eth0: X.X.X.X (external), eth1: 10.0.0.1 - no gateway
for the controller node: eth0: 10.0.0.2 - gateway 10.0.0.1
for the compute node: eth0: 10.0.0.3 - gateway 10.0.0.1
Set up iptables like:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
service iptables save
Also enable forwarding. In the file /etc/sysctl.conf add the line:
net.ipv4.ip_forward = 1
And execute the command:
sysctl -p
Should work.
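A quick sanity check (plain iptables and ping, nothing CentOS-specific): confirm the MASQUERADE rule is matching packets and that a node behind the NAT can reach an outside address:
iptables -t nat -L POSTROUTING -n -v
ping -c 3 8.8.8.8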

iptables command to bridge openstack virtual network

I successfully installed openstack on a spare server using the ubuntu single-node installer script. The openstack status page on the underlying ubuntu instance is green across the board. From the host ubuntu instance I can ping / ssh to all of the various openstack instances which have been started on the virtual network.
I now want to access the horizon dashboard from my pc on the local network. (I can't access it from the host ubuntu machine since it is a server install & thus has no desktop to run a web browser on) My local network is 192.168.1.xxx, with the ubuntu server having a static ip of 192.168.1.200. Horizon was installed on an instance with ip 10.0.4.77.
Based on the following blog post (http://serenity-networks.com/installing-ubuntu-openstack-on-a-single-machine-instead-of-7/), it looks like I need to make an iptables change to the host ubuntu instance to bridge between the two networks. The suggested command from the blog post is:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.250 --dport 8000 -j DNAT --to-destination 10.0.6.241:443
Which if I modify for my network / install would be:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 8000 -j DNAT --to-destination 10.0.4.77:443
However, I am suspicious this is not the preferred way to do this. First, because the --dport 8000 seems wrong, and second because I was under the impression that neutron should be used to create the necessary bridge.
Any help would be appreciated...
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 8000 -j DNAT --to-destination 10.0.4.77:443
This command has nothing to do with Neutron. It just makes your Ubuntu server a router connecting your local network and the OpenStack private network, so that you can access Horizon through an IP on the local network.
--dport 8000 is not fixed; you can change it to any unoccupied port. It only influences the Horizon address you enter in the address bar.
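For example (the same rule with a different front-end port; the addresses are the ones from the question, so adjust for your install), forwarding the standard HTTPS port would be:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 443 -j DNAT --to-destination 10.0.4.77:443
Horizon would then be reachable from the local network at https://192.168.1.200/.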

Couldn't access internet resources even after successfully connecting to PPTP VPN

I rent a host located in Tokyo as my VPS server, and I followed this article to install the PPTP server:
article about installing PPTP from DigitalOcean
My VPS IP is 107.191.60.187.
In addition, I installed ufw and allowed pptpd's port this way:
ufw allow 1723
ufw disable && ufw enable
But in fact I can't access internet resources even though I can successfully connect to the pptpd program on the VPS.
I really don't know how to solve it :(
Could anybody help me?
Thanks a lot.
Just a comment on this question:
Earlier I made the mistake of setting wrong iptables rules; I resolved it with the method below, and it works.
#1. First I inspect the status of the IPsec server and remove it, since it conflicts:
sudo service ipsec status
sudo apt remove ipsec xl2tpd
#2. Then I watch port 1723 to check whether it receives packets:
sudo tcpdump -i eth0 port 1723
#3. Finally I reset the NAT rules with iptables:
sudo iptables -t nat -nL
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
#4. And save them:
sudo iptables -t nat -S
sudo iptables-save -t nat
#5. Modify the content of the file before.rules so the rule is applied persistently:
sudo vi /etc/ufw/before.rules
# so that it contains something like this:
*nat
:PREROUTING ACCEPT [73:5676]
:INPUT ACCEPT [6:1415]
:OUTPUT ACCEPT [7:431]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A POSTROUTING -j MASQUERADE
COMMIT
that's all..
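Note that after editing before.rules, ufw has to be reloaded for the NAT rule to take effect, e.g. with the same command the question already uses:
ufw disable && ufw enable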

Iptables to modify source ip. Nothing in POSTROUTING chain log

Here is a little picture
Asterisk eth1 10.254.254.2/28------------- Many Good Guys
eth1:1 192.168.83.5/32----------- 192.168.59.3 Bad Guy Peer
I have an Asterisk which is connected to several peers. Some of them are connected through eth1 and one, the baddest, through the alias eth1:1.
When my Asterisk sends an INVITE to peers, it goes out with the eth1 source address. So for the bad guy I need to change my source IP to 192.168.83.5. As far as I know this can be done with iptables.
So I tried the rule
iptables -t nat -A POSTROUTING -s 10.254.254.2 -d 192.168.59.3 -j SNAT --to 192.168.83.5
Nothing happens.
When I log, I can see the sent packets in the INPUT and OUTPUT chains with:
iptables -t filter -A OUTPUT -o eth1 -s 10.254.254.2 -d 192.168.59.3 -j LOG --log-level 7 --log-prefix "OUTPUT"
iptables -t filter -A INPUT -i eth1 -s 192.168.59.3 -d 192.168.83.5 -j LOG --log-level 7 --log-prefix "INPUT"
but I don't see anything in the POSTROUTING chain with:
iptables -t nat -A POSTROUTING -s 10.254.254.2 -d 192.168.59.3 -j LOG --log-level 7 --log-prefix "POSTROUTING"
That is, I have nothing to SNAT :(
At the same time the traffic from other peers is visible in POSTROUTING log. What can it be?
Any thoughts, wishes, kicks would be very appreciated!
The solution has been found!
I didn't find a way to make my iptables rules work, but now I know how to do it without iptables at all.
So generally speaking my task was to modify/mask/replace the source IP of eth1 with the eth1:1 IP.
By the way, I use CentOS 5.8.
And there is a command:
ip route add
which lets you specify the src address, unlike the route command.
so
ip route add 192.168.59.3/32 via 10.254.254.1 dev eth1 src 192.168.83.5
does just what I need.
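To double-check which source address the kernel will now pick for that peer, ip route get can be used (standard iproute2, nothing Asterisk-specific):
ip route get 192.168.59.3
The output should show src 192.168.83.5 for the route via eth1.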
Thank you for your attention!
That will not work. The reason is simple: Asterisk will set the source address in the packet to the address of eth1.
You can start another Asterisk on the same host (with a different config dir). I am sorry, I don't know of other simple options.
