I've been setting up OpenStack Victoria by following its installation documents on Ubuntu 20.04.4 LTS. After setting up most of the services I tried to create a server with the openstack server create command, but it failed.
The nova-compute logs pointed me to the neutron logs:
Binding failed for port 50065651-2336-494e-bcd2-314bc76b5542, please check neutron logs for more information.
There are two main errors in the neutron logs:
Port 50065651-2336-494e-bcd2-314bc76b5542 does not have an IP address assigned and there are no driver with 'connectivity' = 'l2'. The port cannot be bound.
Failed to bind port 50065651-2336-494e-bcd2-314bc76b5542 on host controller01 for vnic_type normal using segments [{'id': 'de296ba3-73da-47ed-9312-d4aa48f0b1e7', 'network_type': 'vlan', 'physical_network': 'physnet1', 'segmentation_id': 43, 'network_id': '733427cc-bc91-451c-903c-fa60a368b693'}]
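For reference, the failed binding can also be inspected from the CLI; a minimal sketch (assuming admin credentials are sourced), using the port ID from the error above:
# binding_vif_type shows binding_failed when ML2 finds no suitable mechanism driver
openstack port show 50065651-2336-494e-bcd2-314bc76b5542 \
  -c binding_vif_type -c binding_host_id -c fixed_ips -c status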
Here are the relevant config files:
/etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
dhcp_lease_duration = 86400
dns_domain = learningneutron.com
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:rabbit@controller01
[agent]
root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"
[cors]
[database]
connection = mysql+pymysql://neutron:neutron@controller01/neutron
[ironic]
[keystone_authtoken]
www_authenticate_uri = http://controller01:5000
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
...
/etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers = local,flat,vlan,vxlan
tenant_network_types = vlan,vxlan
machanism_drivers = linuxbridge,l2population,openvswitch
extension_drivers = port_security
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_geneve]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vlan]
network_vlan_ranges = physnet1:40:43
[ml2_type_vxlan]
vni_ranges = 1:1000
[ovs_driver]
[securitygroup]
firewall_driver = iptables_hybrid
enable_security_group = true
enable_ipset = true
[sriov_driver]
/etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[agent]
vxlan_udp_port = 8472
l2_population = true
arp_responder = true
[network_log]
[ovs]
local_ip = 192.168.40.129
bridge_mappings = physnet1:br-ens38
[securitygroup]
firewall_driver = iptables_hybrid
[xenapi]
/etc/nova/nova.conf
[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
my_ip = 192.168.253.134
transport_url = rabbit://openstack:rabbit@controller01:5672/
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:nova@controller01/nova_api
...
[database]
connection = mysql+pymysql://nova:nova@controller01/nova
...
[glance]
api_servers = http://controller01:9292
...
[keystone_authtoken]
www_authenticate_uri = http://controller01:5000/
auth_url = http://controller01:5000/
memcached_servers = controller01:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
...
[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = MetadataSecret123
[notifications]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
...
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller01:5000/v3
username = placement
password = placement
...
I've seen one similar question, but none of the solutions there worked for my situation:
failed to bind port in openstack-neutron
I also checked the service and agent statuses, and everything looks fine to me.
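The output below was gathered with roughly the following commands (an assumption on my part about the exact invocations, since only the output was kept):
ip a                                    # host interfaces
sudo ovs-vsctl show                     # OVS bridges and ports
openstack network agent list -f yaml    # neutron agents
openstack compute service list -f yaml  # nova services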
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:ae:8d:f7 brd ff:ff:ff:ff:ff:ff
inet 192.168.253.134/24 brd 192.168.253.255 scope global dynamic ens33
valid_lft 1066sec preferred_lft 1066sec
inet6 fe80::20c:29ff:feae:8df7/64 scope link
valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:ae:8d:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.129/24 brd 192.168.40.255 scope global dynamic ens37
valid_lft 1063sec preferred_lft 1063sec
inet6 fe80::20c:29ff:feae:8d01/64 scope link
valid_lft forever preferred_lft forever
4: ens38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
link/ether 00:0c:29:ae:8d:0b brd ff:ff:ff:ff:ff:ff
inet6 fe80::20c:29ff:feae:8d0b/64 scope link
valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 42:d9:38:7c:cd:db brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 72:5b:5a:b7:ce:4f brd ff:ff:ff:ff:ff:ff
8: br-ens38: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:0c:29:ae:8d:0b brd ff:ff:ff:ff:ff:ff
9: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:d8:a4:c6 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
10: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:d8:a4:c6 brd ff:ff:ff:ff:ff:ff
e52483be-c1e4-4dc6-87bf-77f8cefe6674
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
datapath_type: system
Port tape077c6b7-02
tag: 4095
trunks: [4095]
Interface tape077c6b7-02
type: internal
Port tap509692c4-f9
tag: 4095
trunks: [4095]
Interface tap509692c4-f9
type: internal
Port int-br-ens38
Interface int-br-ens38
type: patch
options: {peer=phy-br-ens38}
Port br-int
Interface br-int
type: internal
Bridge br-ens38
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
datapath_type: system
Port phy-br-ens38
Interface phy-br-ens38
type: patch
options: {peer=int-br-ens38}
Port ens38
Interface ens38
Port br-ens38
Interface br-ens38
type: internal
ovs_version: "2.13.5"
- Agent Type: Open vSwitch agent
Alive: true
Availability Zone: null
Binary: neutron-openvswitch-agent
Host: controller01
ID: 61146449-41a0-4d41-992e-d24b2d378c4c
State: true
- Agent Type: DHCP agent
Alive: true
Availability Zone: nova
Binary: neutron-dhcp-agent
Host: controller01
ID: 6f0c43b8-26ed-426c-bb99-7aaf763f74e8
State: true
- Agent Type: Metadata agent
Alive: true
Availability Zone: null
Binary: neutron-metadata-agent
Host: controller01
ID: 9292d5ca-342e-4718-a52e-2d6ce2bb2afe
State: true
- Binary: nova-scheduler
Host: controller01
ID: 3
State: up
Status: enabled
Updated At: '2022-08-14T17:48:50.000000'
Zone: internal
- Binary: nova-conductor
Host: controller01
ID: 5
State: up
Status: enabled
Updated At: '2022-08-14T17:48:43.000000'
Zone: internal
- Binary: nova-compute
Host: controller01
ID: 9
State: up
Status: enabled
Updated At: '2022-08-14T17:48:44.000000'
Zone: nova
As expected, the port statuses are DOWN:
- Fixed IP Addresses:
- ip_address: 192.168.200.2
subnet_id: b95f00f4-4862-4783-9cbb-8b20a6eac84c
ID: 509692c4-f908-4582-9ca2-1c8381c3df9f
MAC Address: fa:16:3e:3a:63:91
Name: ''
Status: DOWN
- Fixed IP Addresses:
- ip_address: 192.168.100.2
subnet_id: db8c14df-11f6-4bd6-b39a-f06a3c33ce85
ID: e077c6b7-02e4-4720-b873-9278e2e1886a
MAC Address: fa:16:3e:f5:7d:89
Name: ''
Status: DOWN
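Since the binding error points at a VLAN segment on physnet1, the network's provider attributes and the OVS agent's reported bridge_mappings can be cross-checked; a sketch, using the network and agent IDs already shown above:
# provider attributes of the network the failed port belongs to
openstack network show 733427cc-bc91-451c-903c-fa60a368b693 \
  -c provider:network_type -c provider:physical_network -c provider:segmentation_id
# bridge_mappings as reported by the OVS agent on controller01
openstack network agent show 61146449-41a0-4d41-992e-d24b2d378c4c -c configuration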
Related
I bought a new 1 Gbps server from OneProvider and installed Ubuntu 18.04.6 on it.
The upload speed over SSH or FTP is very good, but the download speed is only about 100 KB/s over SSH and FTP; I also installed Nginx and downloaded from it, and that was about 100 KB/s as well.
I tried from more than 5 devices in different locations, some of them other servers in the same network (using wget), but no attempt exceeded 150 KB/s.
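To rule out protocol and disk overhead, a raw TCP throughput test in both directions can help narrow this down; a minimal sketch with iperf3 (assuming it can be installed on the server and on one of the clients):
# on the server
iperf3 -s
# on a client: upload test, then download test (-R reverses direction)
iperf3 -c (serverip) -t 30
iperf3 -c (serverip) -t 30 -R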
This is the ip a output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d4:ae:52:ca:0f:6e brd ff:ff:ff:ff:ff:ff
inet (serverip)/24 brd 62.210.207.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::d6ae:52ff:feca:f6e/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:ae:52:ca:0f:6f brd ff:ff:ff:ff:ff:ff
ethtool eno1 output:
Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on
Supports Wake-on: g
Wake-on: d
Link detected: yes
ethtool -S eno1 output:
NIC statistics:
rx_bytes: 67518474469
rx_error_bytes: 0
tx_bytes: 1939892582744
tx_error_bytes: 0
rx_ucast_packets: 457688996
rx_mcast_packets: 1105671
rx_bcast_packets: 743858
tx_ucast_packets: 1341579130
tx_mcast_packets: 12
tx_bcast_packets: 4
tx_mac_errors: 0
tx_carrier_errors: 0
rx_crc_errors: 0
rx_align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
tx_deferred: 0
tx_excess_collisions: 0
tx_late_collisions: 0
tx_total_collisions: 0
rx_fragments: 0
rx_jabbers: 0
rx_undersize_packets: 0
rx_oversize_packets: 0
rx_64_byte_packets: 4346996
rx_65_to_127_byte_packets: 430360977
rx_128_to_255_byte_packets: 1072678
rx_256_to_511_byte_packets: 420201
rx_512_to_1023_byte_packets: 250311
rx_1024_to_1522_byte_packets: 23087362
rx_1523_to_9022_byte_packets: 0
tx_64_byte_packets: 899130
tx_65_to_127_byte_packets: 11634758
tx_128_to_255_byte_packets: 2699608
tx_256_to_511_byte_packets: 3443633
tx_512_to_1023_byte_packets: 7211982
tx_1024_to_1522_byte_packets: 1315690035
tx_1523_to_9022_byte_packets: 0
rx_xon_frames: 0
rx_xoff_frames: 0
tx_xon_frames: 0
tx_xoff_frames: 0
rx_mac_ctrl_frames: 0
rx_filtered_packets: 113311
rx_ftq_discards: 0
rx_discards: 0
rx_fw_discards: 0
ifconfig eno1 | grep errors output:
RX errors 0 dropped 93 overruns 0 frame 0
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lshw -C network output:
*-network:0
description: Ethernet interface
product: NetXtreme II BCM5716 Gigabit Ethernet
vendor: Broadcom Inc. and subsidiaries
physical id: 0
bus info: pci@0000:01:00.0
logical name: eno1
version: 20
serial: d4:ae:52:ca:0f:6e
size: 1Gbit/s
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=full firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 ip=(serverip) latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:16 memory:c0000000-c1ffffff
*-network:1 DISABLED
description: Ethernet interface
product: NetXtreme II BCM5716 Gigabit Ethernet
vendor: Broadcom Inc. and subsidiaries
physical id: 0.1
bus info: pci@0000:01:00.1
logical name: eno2
version: 20
serial: d4:ae:52:ca:0f:6f
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=half firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 latency=0 link=no multicast=yes port=twisted pair
resources: irq:17 memory:c2000000-c3ffffff
I am struggling with attaching OVS-DPDK ports to my VM.
I am new to OpenStack and OVS-DPDK; here is my current setup:
I have created a VM with physnet ports, which are SR-IOV ports.
I have 2 other ports that will be attached through OVS-DPDK. OVS-DPDK is installed (ovs-vswitchd (Open vSwitch) 2.17.0, DPDK 21.11.0) and I have done the steps below.
Bind the UIO driver to the NIC ports:
dpdk-devbind.py -b vfio-pci 08:00.0
dpdk-devbind.py -b vfio-pci 08:00.1
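To confirm the binding took effect before the ports are added to OVS, the same tool can list the device status (a quick check, not part of the original steps):
# 08:00.0 and 08:00.1 should now appear under
# "Network devices using DPDK-compatible driver" with drv=vfio-pci
dpdk-devbind.py --status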
Bind these DPDK ports to OVS (as dpdk ports):
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk ofport_request=1 options:dpdk-devargs=0000:08:00.0
ovs-vsctl add-port br0 dpdk-p1 -- set Interface dpdk-p1 type=dpdk ofport_request=2 options:dpdk-devargs=0000:08:00.1
/usr/libexec/qemu-kvm -name guest=instance-0000000c -chardev socket,id=char1,path=/var/run/dpdkvhostclient1,server -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 -object memory-backend-file,id=mem1,size=0x8000000,mem-path=/dev/hugepages,share=on -numa node,memdev=mem1 -mem-prealloc &
and
/usr/libexec/qemu-kvm -name guest=instance-0000000c -chardev socket,id=char1,path=/var/run/dpdkvhostclient2,server -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 -object memory-backend-file,id=mem1,size=0x8000000,mem-path=/dev/hugepages,share=on -numa node,memdev=mem1 -mem-prealloc &
Add the vhost-user ports to OVS:
ovs-vsctl add-port br0 dpdkvhostclient1 -- set Interface dpdkvhostclient1 type=dpdkvhostuserclient ofport_request=3 options:vhost-server-path=/var/run/dpdkvhostclient1
ovs-vsctl add-port br0 dpdkvhostclient2 -- set Interface dpdkvhostclient2 type=dpdkvhostuserclient ofport_request=4 options:vhost-server-path=/var/run/dpdkvhostclient2
Add flows forwarding packets between the vhost-user ports and the dpdk ports:
ovs-ofctl del-flows br0
ovs-ofctl add-flow br0 in_port=1,actions=output:3
ovs-ofctl add-flow br0 in_port=2,actions=output:4
ovs-ofctl add-flow br0 in_port=3,actions=output:1
ovs-ofctl add-flow br0 in_port=4,actions=output:2
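The port numbers and flows can be verified from OVS itself; a small sketch of the checks:
# confirm the vhost-user interfaces exist, got the expected ofport and show no error
ovs-vsctl --columns=name,ofport,error list Interface dpdkvhostclient1 dpdkvhostclient2
# confirm the forwarding rules above are installed and count matched packets
ovs-ofctl dump-flows br0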
I logged into the VM, but none of the DPDK ports show up in ifconfig -a either.
I am following https://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/#dpdk-vhost-user-client
I also tried putting the following in the XML of my VM instance:
<cpu mode='host-model' check='partial'>
<model fallback='allow'/>
<topology sockets='6' cores='1' threads='1'/>
<numa>
<cell id='0' cpus='0-5' memory='4096' unit='KiB' memAccess='shared'/>
</numa>
</cpu>
<memoryBacking>
<hugepages>
<page size='2048' unit='G'/>
</hugepages>
<locked/>
<source type='file'/>
<access mode='shared'/>
<allocation mode='immediate'/>
<discard/>
</memoryBacking>
<interface type='vhostuser'>
<mac address='0c:c4:7a:ea:4b:b2'/>
<source type='unix' path='/var/run/dpdkvhostclient1' mode='server'/>
<target dev='dpdkvhostclient1'/>
<model type='virtio'/>
<driver queues='2'>
<host mrg_rxbuf='on'/>
</driver>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='0c:c4:7a:ea:4b:b3'/>
<source type='unix' path='/var/run/dpdkvhostclient2' mode='server'/>
<target dev='dpdkvhostclient2'/>
<model type='virtio'/>
<driver queues='2'>
<host mrg_rxbuf='on'/>
</driver>
<address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
</interface>
The MAC addresses of these vhost-user ports are random ones, and the PCI slots are also ones that were not present in the XML. The NUMA block was added to the CPU section and the memoryBacking section was added as well; I rebooted the instance, but the new interfaces still did not appear in the VM.
The vhost-user ports are shown as DOWN, as below:
ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:00001cfd0870760c
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk-p0): addr:1c:fd:08:70:76:0c
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
2(dpdk-p1): addr:1c:fd:08:70:76:0d
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
3(dpdkvhostclient): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
4(dpdkvhostclient): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:1c:fd:08:70:76:0c
config: PORT_DOWN
state: LINK_DOWN
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
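One thing worth checking in this state is whether anything is connected to the vhost-user sockets at all, since LINK_DOWN on a dpdkvhostuserclient port usually just means no server has created or opened the socket yet; a sketch:
# in client mode OVS connects to sockets created by QEMU; the files must exist
ls -l /var/run/dpdkvhostclient1 /var/run/dpdkvhostclient2
# the status column is populated once the vhost-user connection is established
ovs-vsctl get Interface dpdkvhostclient1 status
ovs-vsctl get Interface dpdkvhostclient2 status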
What am I missing?
Here is my playbook. When I execute it, the command ip dhcp pool {{ item.name }} does not check whether the pool name exists or not. I used the parameter "match: exact" but it doesn't work. Can I use an if statement in an Ansible playbook, or is there another way to check the pool name before executing the commands?
I also tried using when in the playbook to check whether item.name is defined, but that doesn't work either.
---
- name: "UPDATE DHCP OPTIONS FOR FNAC-FRANCHISE SWITCHES"
hosts: all
gather_facts: false
vars:
xx: ["ip dhcp pool VOICEC_DHCP","ip dhcp pool DATA","ip dhcp pool VIDEO_DHCP ","ip dhcp pool WIFI_USER"," ip dhcp pool WIFI_ADM", ]
tasks:
- name: "CHECK"
ios_command:
commands:
- show run | include ip dhcp pool
register: output
- name: DISPLAY THE COMMAND OUTPUT
debug:
var: output.stdout_lines
- name: transform output
set_fact:
pools: "{{ item | regex_replace('ip dhcp pool ', '') }}"
loop: "{{ output.stdout_lines }}"
- name: "UPDATE DHCP OPTIONS IN POOL DATA & WIFI_USER"
ios_config:
lines:
- dns-server 10.0.0.1
- netbios-name-server 10.0.0.1
- netbios-node-type h-node
parents: ip dhcp pool {{ item.name }}
match: exact
loop: "{{ pools }}"
Here is the output that I have:
ok: [ITG] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "stdout": ["ip dhcp pool VIDEO_DHCP\nip dhcp pool WIFI_ADM\nip dhcp pool VOICEC_DHCP\nip dhcp pool DATA\nip dhcp pool WIFI_USER"], "stdout_lines": [["ip dhcp pool VIDEO_DHCP", "ip dhcp pool WIFI_ADM", "ip dhcp pool VOICEC_DHCP", "ip dhcp pool DATA", "ip dhcp pool WIFI_USER"]]}
TASK [DISPLAY THE COMMAND OUTPUT] *********************************************************************************************************************************************
ok: [ITG] => {
"output.stdout_lines": [
[
"ip dhcp pool VIDEO_DHCP",
"ip dhcp pool WIFI_ADM",
"ip dhcp pool VOICEC_DHCP",
"ip dhcp pool DATA",
"ip dhcp pool WIFI_USER"
]
]
}
TASK [transform output] *******************************************************************************************************************************************************
ok: [ITG] => (item=[u'ip dhcp pool VIDEO_DHCP', u'ip dhcp pool WIFI_ADM', u'ip dhcp pool VOICEC_DHCP', u'ip dhcp pool DATA', u'ip dhcp pool WIFI_USER']) => {"ansible_facts": {"pools": ["VIDEO_DHCP", "WIFI_ADM", "VOICEC_DHCP", "DATA", "WIFI_USER"]}, "ansible_loop_var": "item", "changed": false, "item": ["ip dhcp pool VIDEO_DHCP", "ip dhcp pool WIFI_ADM", "ip dhcp pool VOICEC_DHCP", "ip dhcp pool DATA", "ip dhcp pool WIFI_USER"]}
There are switches where the pool name WIFI_USER does not exist, and I don't want to create it on those switches if the line "parents: ip dhcp pool {{ item.name }}" does not match.
If I simulate your return value:
- name: vartest
hosts: localhost
vars: # used just to test
xx: ["ip dhcp pool VOICEC_DHCP","ip dhcp pool DATA","ip dhcp pool VIDEO_DHCP ","ip dhcp pool WIFI_USER"," ip dhcp pool WIFI_ADM", ]
tasks:
- name: trap output
ios_command:
commands:
- show run | include ip dhcp pool
register: output
- name: transform output
set_fact:
pools: "{{ pools | default([]) + [item | regex_replace('ip dhcp pool ', '')] }}"
loop: "{{ output.stdout_lines[0] }}" #you adapt following the result output.stdout or something else
- name: "UPDATE DHCP OPTIONS IN POOL DATA & WIFI_USER"
ios_config:
commands:
- dns-server <ip-addr>
- netbios-name-server <ip-addr>
- netbios-node-type h-node
parents: ip dhcp pool {{ item }}
loop: "{{ pools }}"
In order to test SRv6 uSID on Linux, I compiled the new 5.6.0 kernel from the following GitHub repository:
https://github.com/netgroup/srv6-usid-linux-kernel.git
After compiling and rebooting, my 2nd network adapter port (eth1) disappeared. The two network adapter ports should be the same type, yet only eth0 was renamed to ens3, as follows:
[root@frank cisco]# uname -a
Linux frank 5.6.0+ #3 SMP Tue Jun 30 17:32:20 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@frank cisco]# dmesg |grep eth
[ 2.311925] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 5e:00:00:00:00:00
[ 2.314897] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 2.770167] e1000 0000:00:04.0 eth1: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[ 2.773194] e1000 0000:00:04.0 eth1: Intel(R) PRO/1000 Network Connection
[ 5.352825] e1000 0000:00:03.0 ens3: renamed from eth0
[root@frank cisco]#
[root@frank cisco]# lshw -class network -businfo
Bus info Device Class Description
========================================================
pci@0000:00:03.0 ens3 network 82540EM Gigabit Ethernet Controller
pci@0000:00:04.0 network 82540EM Gigabit Ethernet Controller
The dmesg output for the two ports follows:
[root@frank cisco]# dmesg |grep 00:03.0
[ 0.700489] pci 0000:00:03.0: [8086:100e] type 00 class 0x020000
[ 0.702057] pci 0000:00:03.0: reg 0x10: [mem 0xfeb80000-0xfeb9ffff]
[ 0.703921] pci 0000:00:03.0: reg 0x14: [io 0xc000-0xc03f]
[ 0.707532] pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
[ 2.311925] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 5e:00:00:00:00:00
[ 2.314897] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 5.352825] e1000 0000:00:03.0 ens3: renamed from eth0
[root@frank cisco]#
[root@frank cisco]# dmesg |grep 00:04.0
[ 0.708456] pci 0000:00:04.0: [8086:100e] type 00 class 0x020000
[ 0.710057] pci 0000:00:04.0: reg 0x10: [mem 0xfeba0000-0xfebbffff]
[ 0.711846] pci 0000:00:04.0: reg 0x14: [io 0xc040-0xc07f]
[ 0.715515] pci 0000:00:04.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
[ 2.770167] e1000 0000:00:04.0 eth1: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[ 2.773194] e1000 0000:00:04.0 eth1: Intel(R) PRO/1000 Network Connection
The lshw output follows; note "driver=uio_pci_generic":
[root@frank v2.81]# lshw -c network
*-network:0
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 3
bus info: pci@0000:00:03.0
logical name: ens3
version: 03
serial: 5e:00:00:00:00:00
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=172.16.1.140 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:10 memory:feb80000-feb9ffff ioport:c000(size=64) memory:feb00000-feb3ffff
*-network:1
description: Ethernet controller
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 4
bus info: pci@0000:00:04.0
version: 03
width: 32 bits
clock: 33MHz
capabilities: bus_master rom
configuration: driver=uio_pci_generic latency=0 <<<
resources: irq:11 memory:feba0000-febbffff ioport:c040(size=64) memory:feb40000-feb7ffff
I then found the port bound to a DPDK-compatible driver, even though I never set any binding config...
[root@frank v2.81]# ./dpdk_setup_ports.py -s
Network devices using DPDK-compatible driver
============================================
0000:00:04.0 '82540EM Gigabit Ethernet Controller' drv=uio_pci_generic unused=e1000,igb_uio,vfio-pci <<<
Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio,vfio-pci,uio_pci_generic
Other network devices
=====================
<none>
Does anyone know what is going on, and how to solve this problem?
Thanks a lot!
Frank
After discussing with colleagues, the issue seems to be related to this guide:
https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html
Following that guide I can work around the issue, but it comes back after every reboot...
[root@frank v2.81]# ls -l /sys/bus/pci/devices/0000:00:04.0/driver
lrwxrwxrwx. 1 root root 0 Jun 30 17:59 /sys/bus/pci/devices/0000:00:04.0/driver -> ../../../bus/pci/drivers/uio_pci_generic
[root@frank v2.81]# echo -n 0000:00:04.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind
[root@frank v2.81]# echo -n 0000:00:04.0 > /sys/bus/pci/drivers/e1000/bind
[79965.358393] e1000 0000:00:04.0 eth0: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[79965.360499] e1000 0000:00:04.0 eth0: Intel(R) PRO/1000 Network Connection
[root@frank v2.81]# ls -l /sys/bus/pci/devices/0000:00:04.0/driver
lrwxrwxrwx. 1 root root 0 Jul 1 16:12 /sys/bus/pci/devices/0000:00:04.0/driver -> ../../../bus/pci/drivers/e1000
[root@frank cisco]# ifconfig eth0 up
[ 221.792886] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 221.796553] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[root@frank cisco]# lshw -c network
*-network:0
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 3
bus info: pci@0000:00:03.0
logical name: ens3
version: 03
serial: 5e:00:00:00:00:00
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=172.16.1.140 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:11 memory:feb80000-feb9ffff ioport:c000(size=64) memory:feb00000-feb3ffff
*-network:1
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 4
bus info: pci@0000:00:04.0
logical name: eth0
version: 03
serial: fa:16:3e:38:fd:91
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:11 memory:feba0000-febbffff ioport:c040(size=64) memory:feb40000-feb7ffff
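Since the unbind/bind above only lasts until the next reboot, one way to make it stick is to replay the same sysfs writes at boot time; a rough sketch (the exact boot hook, e.g. rc.local or a small systemd oneshot, is an assumption, and this only works around whatever binds uio_pci_generic in the first place):
# e.g. from /etc/rc.local or an equivalent boot-time script
echo -n 0000:00:04.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind
echo -n 0000:00:04.0 > /sys/bus/pci/drivers/e1000/bind
ifconfig eth0 up   # the port came back as eth0 in the session above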
I have successfully deployed everything in Red Hat OpenStack 11 with the following settings. I am not able to ping the floating IP externally, but I can ping, SSH and do other things from within the namespaces.
I have three controllers and two hyperconverged compute nodes.
VLAN for RHOSP 11 Setup
172.26.11.0/24 - Provision Network ( VLAN2611 )
172.26.12.0/24 - Internal Network ( VLAN2612 )
172.26.13.0/24 - Tenant Network ( VLAN2613 )
172.26.14.0/24 - Storage Network ( VLAN2614 )
172.26.16.0/24 - Storage Management ( VLAN2616 )
172.26.17.0/24 - Management Network ( VLAN2617 )
172.30.10.0/23 - External Network ( VLAN3010 )
Server Setup:
[stack@director ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+-----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------------+--------+------------+-------------+-----------------------+
| 3e37a6ed-1b0a-49de-9aa8-5515949ad11a | overcloud-compute-0 | ACTIVE | - | Running | ctlplane=172.26.11.13 |
| 3bab2815-1df8-4b1a-ab70-fa1d00dd5889 | overcloud-compute-1 | ACTIVE | - | Running | ctlplane=172.26.11.25 |
| 531cc5ad-ceb2-40c4-9662-1a984eea1907 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=172.26.11.12 |
| 598cb725-ed9d-4e7f-b8d1-3d5ac0df86d8 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=172.26.11.23 |
| a92cbacd-301e-4201-aa74-b100eb245345 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=172.26.11.28 |
+--------------------------------------+------------------------+--------+------------+-------------+-----------------------+
Controller-0 IPs assigned:
The other two controllers have the same IP address configuration.
[stack@director ~]$ ssh heat-admin@172.26.11.12
Last login: Wed Feb 14 09:23:13 2018 from 172.26.11.254
[heat-admin@overcloud-controller-0 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether c8:1f:66:e1:1a:c3 brd ff:ff:ff:ff:ff:ff
inet 172.26.11.12/24 brd 172.26.11.255 scope global em1
valid_lft forever preferred_lft forever
inet 172.26.11.22/32 brd 172.26.11.255 scope global em1
valid_lft forever preferred_lft forever
inet6 fe80::ca1f:66ff:fee1:1ac3/64 scope link
valid_lft forever preferred_lft forever
3: em2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
link/ether c8:1f:66:e1:1a:c4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ca1f:66ff:fee1:1ac4/64 scope link
valid_lft forever preferred_lft forever
4: em3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
link/ether c8:1f:66:e1:1a:c5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ca1f:66ff:fee1:1ac5/64 scope link
valid_lft forever preferred_lft forever
5: em4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether c8:1f:66:e1:1a:c6 brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether c6:05:34:74:27:e0 brd ff:ff:ff:ff:ff:ff
7: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether c8:1f:66:e1:1a:c4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::800e:f6ff:fe6d:245/64 scope link
valid_lft forever preferred_lft forever
8: vlan2612: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether 9a:12:3a:34:7a:7c brd ff:ff:ff:ff:ff:ff
inet 172.26.12.12/24 brd 172.26.12.255 scope global vlan2612
valid_lft forever preferred_lft forever
inet 172.26.12.18/32 brd 172.26.12.255 scope global vlan2612
valid_lft forever preferred_lft forever
inet6 fe80::9812:3aff:fe34:7a7c/64 scope link
valid_lft forever preferred_lft forever
9: vlan2613: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:2d:8b:7b:f1:21 brd ff:ff:ff:ff:ff:ff
inet 172.26.13.20/24 brd 172.26.13.255 scope global vlan2613
valid_lft forever preferred_lft forever
inet6 fe80::f82d:8bff:fe7b:f121/64 scope link
valid_lft forever preferred_lft forever
10: vlan2614: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether c2:ea:76:13:4e:16 brd ff:ff:ff:ff:ff:ff
inet 172.26.14.18/24 brd 172.26.14.255 scope global vlan2614
valid_lft forever preferred_lft forever
inet6 fe80::c0ea:76ff:fe13:4e16/64 scope link
valid_lft forever preferred_lft forever
11: vlan2616: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether 82:e6:64:04:d7:23 brd ff:ff:ff:ff:ff:ff
inet 172.26.16.12/24 brd 172.26.16.255 scope global vlan2616
valid_lft forever preferred_lft forever
inet6 fe80::80e6:64ff:fe04:d723/64 scope link
valid_lft forever preferred_lft forever
12: vlan2617: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether d2:74:4f:18:b5:3c brd ff:ff:ff:ff:ff:ff
inet 172.26.17.14/24 brd 172.26.17.255 scope global vlan2617
valid_lft forever preferred_lft forever
inet6 fe80::d074:4fff:fe18:b53c/64 scope link
valid_lft forever preferred_lft forever
13: vlan3010: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether 32:e2:86:b9:d2:3e brd ff:ff:ff:ff:ff:ff
inet 172.30.10.21/23 brd 172.30.11.255 scope global vlan3010
valid_lft forever preferred_lft forever
inet6 fe80::30e2:86ff:feb9:d23e/64 scope link
valid_lft forever preferred_lft forever
14: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether f2:7e:78:3c:ee:49 brd ff:ff:ff:ff:ff:ff
15: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether a2:4d:a0:64:3a:4e brd ff:ff:ff:ff:ff:ff
16: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
17: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
18: gre_sys@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65490 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000
link/ether f6:71:95:be:da:53 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f471:95ff:febe:da53/64 scope link
valid_lft forever preferred_lft forever
Controller-0 OVS Bridge:
qg is the external interface of the SDN router; qr is the internal interface of the SDN router.
These interfaces are created directly inside br-int. In older versions of RHOSP there was no patch between br-int and br-ex, so the qg interface was created directly in br-ex. In this version both interfaces are created inside br-int; if I change the external bridge to br-int in all the L3 agents, the router interfaces show down, even though all the ping and SSH communication still works inside the qrouter namespace itself.
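Given that wiring, traffic for a floating IP should leave via the qg port on br-int, cross the int-br-ex/phy-br-ex patch, get retagged to the external VLAN on br-ex and go out over bond1; a sketch of checks along that path (interface names are taken from the output below, and the VLAN-retag detail is my reading of the standard neutron OVS flows):
# flows on the external bridge: traffic arriving from phy-br-ex for the external
# network should be rewritten to VLAN 3010 before leaving on bond1
sudo ovs-ofctl dump-flows br-ex
# watch whether ARP/ICMP for the floating IP actually reaches the wire
# (bond1 is an OVS bond, so capture on its member NICs em2/em3)
sudo tcpdump -nei em2 'arp or icmp'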
[heat-admin@overcloud-controller-0 ~]$ sudo ovs-vsctl show
f6411a64-6dbd-4a7d-931a-6a99b63d7911
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "qg-0f094325-6c"
tag: 10
Interface "qg-0f094325-6c"
type: internal
Port "qr-fff1e03e-44"
tag: 8
Interface "qr-fff1e03e-44"
type: internal
Port "tapef7874a7-a3"
tag: 8
Interface "tapef7874a7-a3"
type: internal
Port "ha-a3430c62-90"
tag: 4095
Interface "ha-a3430c62-90"
type: internal
Port "ha-37bad2be-92"
tag: 9
Interface "ha-37bad2be-92"
type: internal
Port "tap102385e5-b7"
tag: 4
Interface "tap102385e5-b7"
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "gre-ac1a0d0f"
Interface "gre-ac1a0d0f"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.15"}
Port "gre-ac1a0d10"
Interface "gre-ac1a0d10"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.16"}
Port "gre-ac1a0d16"
Interface "gre-ac1a0d16"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.22"}
Port br-tun
Interface br-tun
type: internal
Port "gre-ac1a0d0c"
Interface "gre-ac1a0d0c"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.12"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "vlan2617"
tag: 2617
Interface "vlan2617"
type: internal
Port "vlan2612"
tag: 2612
Interface "vlan2612"
type: internal
Port "vlan2613"
tag: 2613
Interface "vlan2613"
type: internal
Port br-ex
Interface br-ex
type: internal
Port "vlan3010"
tag: 3010
Interface "vlan3010"
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "vlan2614"
tag: 2614
Interface "vlan2614"
type: internal
Port "vlan2616"
tag: 2616
Interface "vlan2616"
type: internal
Port "bond1"
Interface "em2"
Interface "em3"
ovs_version: "2.6.1"
Neutron Agent List
[heat-admin@overcloud-controller-0 ~]$ neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------+--------------------+--------------------------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------+--------------------+--------------------------------+-------------------+-------+----------------+---------------------------+
| 08afba9b-1952-4c43-a3ec- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| 1b6a1cf49370 | | controller-1.localdomain | | | | |
| 1c7794b0-726c-4d70-81bc- | Metadata agent | overcloud- | | :-) | True | neutron-metadata-agent |
| df761ad105bd | | controller-1.localdomain | | | | |
| 23aba452-ecb2-4d61-96b5-f8224c | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| 6de482 | | controller-0.localdomain | | | | |
| 2acabaa4-cad1-4e25-b102-fe5f72 | DHCP agent | overcloud- | nova | :-) | True | neutron-dhcp-agent |
| 0de5b8 | | controller-2.localdomain | | | | |
| 38074c45-565c-45bb- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| ae21-c636c9df73b1 | | controller-2.localdomain | | | | |
| 58b8a5bd-e438-4cb5-9267-ad87c6 | DHCP agent | overcloud- | nova | :-) | True | neutron-dhcp-agent |
| 10dbb3 | | controller-1.localdomain | | | | |
| 5fbe010b-34af- | Metadata agent | overcloud- | | :-) | True | neutron-metadata-agent |
| 4a14-9965-393f37587682 | | controller-0.localdomain | | | | |
| 6e1d3d2a- | Metadata agent | overcloud- | | :-) | True | neutron-metadata-agent |
| 6ec4-47ab-8639-2ae945b19adc | | controller-2.localdomain | | | | |
| 901c0300-5081-412d- | L3 agent | overcloud- | nova | :-) | True | neutron-l3-agent |
| a7e8-2e77acc098bf | | controller-2.localdomain | | | | |
| b0b47dfb- | DHCP agent | overcloud- | nova | :-) | True | neutron-dhcp-agent |
| 7d78-46e3-9c22-b1172989cfef | | controller-0.localdomain | | | | |
| cb0b6b69-320d-48dd- | L3 agent | overcloud- | nova | :-) | True | neutron-l3-agent |
| b3e3-f504889edae9 | | controller-0.localdomain | | | | |
| cdf555d7-0537-4bdc- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| bf77-5abe77709fe3 | | compute-0.localdomain | | | | |
| ddd0bb3e-0429-4e10-8adb- | L3 agent | overcloud- | nova | :-) | True | neutron-l3-agent |
| b81233e75ac0 | | controller-1.localdomain | | | | |
| e7524f86-81e4-46e5-ab2c- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| d6311427369d | | compute-1.localdomain | | | | |
+--------------------------------+--------------------+--------------------------------+-------------------+-------+----------------+---------------------------+
One of the L3 agents:
[heat-admin@overcloud-controller-0 ~]$ neutron agent-show 901c0300-5081-412d-a7e8-2e77acc098bf
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+---------------------+-------------------------------------------------------------------------------+
| Field | Value |
+---------------------+-------------------------------------------------------------------------------+
| admin_state_up | True |
| agent_type | L3 agent |
| alive | True |
| availability_zone | nova |
| binary | neutron-l3-agent |
| configurations | { |
| | "agent_mode": "legacy", |
| | "gateway_external_network_id": "", |
| | "handle_internal_only_routers": true, |
| | "routers": 1, |
| | "interfaces": 1, |
| | "floating_ips": 1, |
| | "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver", |
| | "log_agent_heartbeats": false, |
| | "external_network_bridge": "", |
| | "ex_gw_ports": 1 |
| | } |
| created_at | 2018-02-01 06:54:56 |
| description | |
| heartbeat_timestamp | 2018-02-02 13:25:52 |
| host | overcloud-controller-2.localdomain |
| id | 901c0300-5081-412d-a7e8-2e77acc098bf |
| started_at | 2018-02-02 11:02:27 |
| topic | l3_agent |
+---------------------+-------------------------------------------------------------------------------+
Neutron Router and DHCP Agent
The Neutron DHCP namespace is available and is used to ping the SDN router gateway.
[heat-admin@overcloud-controller-0 ~]$ ip netns
qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb
qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10
Router Gateway using QDHCP
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10 ping 172.30.10.173
PING 172.30.10.173 (172.30.10.173) 56(84) bytes of data.
64 bytes from 172.30.10.173: icmp_seq=1 ttl=64 time=1.16 ms
64 bytes from 172.30.10.173: icmp_seq=2 ttl=64 time=0.090 ms
64 bytes from 172.30.10.173: icmp_seq=3 ttl=64 time=0.092 ms
^Z
[1]+ Stopped sudo ip netns exec qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10 ping 172.30.10.173
Floating IP of an instance using QDHCP
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10 ping 172.30.10.178
PING 172.30.10.178 (172.30.10.178) 56(84) bytes of data.
From 172.30.10.178 icmp_seq=1 Destination Host Unreachable
From 172.30.10.178 icmp_seq=2 Destination Host Unreachable
From 172.30.10.178 icmp_seq=3 Destination Host Unreachable
From 172.30.10.178 icmp_seq=4 Destination Host Unreachable
^C
--- 172.30.10.178 ping statistics ---
6 packets transmitted, 0 received, +4 errors, 100% packet loss, time 5000ms
pipe 4
Router Gateway using QROUTER
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.173
PING 172.30.10.173 (172.30.10.173) 56(84) bytes of data.
64 bytes from 172.30.10.173: icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from 172.30.10.173: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 172.30.10.173: icmp_seq=3 ttl=64 time=0.063 ms
64 bytes from 172.30.10.173: icmp_seq=4 ttl=64 time=0.056 ms
^Z
[5]+ Stopped sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.173
Floating IP of an instance using QROUTER
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.178
PING 172.30.10.178 (172.30.10.178) 56(84) bytes of data.
From 172.30.10.178 icmp_seq=1 Destination Host Unreachable
From 172.30.10.178 icmp_seq=2 Destination Host Unreachable
From 172.30.10.178 icmp_seq=3 Destination Host Unreachable
From 172.30.10.178 icmp_seq=4 Destination Host Unreachable
^Z
[6]+ Stopped sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.178
Route of QRouter
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 0 0 0 qg-e8f74c7c-58
30.30.30.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-6a11beee-45
link-local 0.0.0.0 255.255.255.0 U 0 0 0 ha-4ad3b415-1b
169.254.192.0 0.0.0.0 255.255.192.0 U 0 0 0 ha-4ad3b415-1b
172.30.10.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-e8f74c7c-58
IP Route of QRouter
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ip route
default via 172.30.10.10 dev qg-e8f74c7c-58
30.30.30.0/24 dev qr-6a11beee-45 proto kernel scope link src 30.30.30.254
169.254.0.0/24 dev ha-4ad3b415-1b proto kernel scope link src 169.254.0.1
169.254.192.0/18 dev ha-4ad3b415-1b proto kernel scope link src 169.254.192.3
172.30.10.0/24 dev qg-e8f74c7c-58 proto kernel scope link src 172.30.10.173
Router Gateway IP & Floating IP
The router gateway IP and the floating IP are both assigned to the qg interface.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
21: ha-4ad3b415-1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:16:3e:08:33:4b brd ff:ff:ff:ff:ff:ff
inet 169.254.192.3/18 brd 169.254.255.255 scope global ha-4ad3b415-1b
valid_lft forever preferred_lft forever
inet 169.254.0.1/24 scope global ha-4ad3b415-1b
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe08:334b/64 scope link
valid_lft forever preferred_lft forever
22: qg-e8f74c7c-58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:16:3e:90:73:04 brd ff:ff:ff:ff:ff:ff
inet 172.30.10.173/24 scope global qg-e8f74c7c-58
valid_lft forever preferred_lft forever
inet 172.30.10.178/32 scope global qg-e8f74c7c-58
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe90:7304/64 scope link
valid_lft forever preferred_lft forever
23: qr-6a11beee-45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:16:3e:cd:08:bf brd ff:ff:ff:ff:ff:ff
inet 30.30.30.254/24 scope global qr-6a11beee-45
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fecd:8bf/64 scope link
valid_lft forever preferred_lft forever
Expected Answer:
We should be able to reach the instance's floating IP externally.
Currently, we are not able to ping the floating IP assigned to the instance.
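A possible next debugging step, mirroring the namespace tests above (the instance's fixed IP is not shown in the question, so it is a placeholder here):
# check the DNAT/SNAT rules installed for the floating IP
sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb \
    iptables -t nat -S | grep 172.30.10.178
# check that the instance's fixed IP on the 30.30.30.0/24 tenant subnet answers
# from the router; if not, security groups or the compute-side wiring are the next suspects
sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb \
    ping -c 3 <instance-fixed-ip>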