How to attach vhostuser port to VM: ports are not being shown in VM - openstack

I am struggling with attaching OVS-DPDK ports to my VM.
I am new to OpenStack and OVS-DPDK; here is my current setup:
I have created a VM with physnet ports, which are SR-IOV ports.
I have 2 other ports which will be associated with OVS-DPDK. OVS-DPDK is installed (ovs-vswitchd (Open vSwitch) 2.17.0, DPDK 21.11.0) and I have done the steps below.
Bind the vfio-pci driver to the NIC ports:
dpdk-devbind.py -b vfio-pci 08:00.0
dpdk-devbind.py -b vfio-pci 08:00.1
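For reference, the binding can be double-checked with dpdk-devbind.py; both devices should appear under "Network devices using DPDK-compatible driver" with drv=vfio-pci:
dpdk-devbind.py --status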
Add these DPDK-bound ports to OVS as dpdk ports:
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk ofport_request=1 options:dpdk-devargs=0000:08:00.0
ovs-vsctl add-port br0 dpdk-p1 -- set Interface dpdk-p1 type=dpdk ofport_request=2 options:dpdk-devargs=0000:08:00.1
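These add-port commands only take effect if ovs-vswitchd was started with DPDK support (other_config:dpdk-init=true); one way to confirm that is:
ovs-vsctl get Open_vSwitch . dpdk_initialized                    # should print true
ovs-vsctl --columns=name,error list Interface dpdk-p0 dpdk-p1    # error should be empty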
/usr/libexec/qemu-kvm -name guest=instance-0000000c -chardev socket,id=char1,path=/var/run/dpdkvhostclient1,server -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 -object memory-backend-file,id=mem1,size=0x8000000,mem-path=/dev/hugepages,share=on -numa node,memdev=mem1 -mem-prealloc &
and
/usr/libexec/qemu-kvm -name guest=instance-0000000c -chardev socket,id=char1,path=/var/run/dpdkvhostclient2,server -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 -object memory-backend-file,id=mem1,size=0x8000000,mem-path=/dev/hugepages,share=on -numa node,memdev=mem1 -mem-prealloc &
Add the vhost-user client ports to OVS:
ovs-vsctl add-port br0 dpdkvhostclient1 -- set Interface dpdkvhostclient1 type=dpdkvhostuserclient ofport_request=3 options:vhost-server-path=/var/run/dpdkvhostclient1
ovs-vsctl add-port br0 dpdkvhostclient2 -- set Interface dpdkvhostclient2 type=dpdkvhostuserclient ofport_request=4 options:vhost-server-path=/var/run/dpdkvhostclient2
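Once QEMU has created the server sockets, the connection state of these client ports can be checked from the OVS side, for example:
ovs-vsctl --columns=name,error,status list Interface dpdkvhostclient1 dpdkvhostclient2
ovs-appctl dpif-netdev/pmd-rxq-show      # shows which PMD thread polls each port rx queue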
Add flows that forward packets between the vhost-user ports and the DPDK ports:
ovs-ofctl del-flows br0
ovs-ofctl add-flow br0 in_port=1,actions=output:3
ovs-ofctl add-flow br0 in_port=2,actions=output:4
ovs-ofctl add-flow br0 in_port=3,actions=output:1
ovs-ofctl add-flow br0 in_port=4,actions=output:2
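To see whether any traffic actually hits these flows, the counters can be inspected:
ovs-ofctl dump-flows br0     # n_packets shows how many packets matched each flow
ovs-ofctl dump-ports br0     # per-port rx/tx statistics for the dpdk and vhost ports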
I logged into the VM, but none of the DPDK ports show up in ifconfig -a either.
I am following https://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/#dpdk-vhost-user-client
I also tried putting the following in the XML of my VM instance:
<cpu mode='host-model' check='partial'>
<model fallback='allow'/>
<topology sockets='6' cores='1' threads='1'/>
<numa>
<cell id='0' cpus='0-5' memory='4096' unit='KiB' memAccess='shared'/>
</numa>
</cpu>
<memoryBacking>
<hugepages>
<page size='2048' unit='G'/>
</hugepages>
<locked/>
<source type='file'/>
<access mode='shared'/>
<allocation mode='immediate'/>
<discard/>
</memoryBacking>
<interface type='vhostuser'>
<mac address='0c:c4:7a:ea:4b:b2'/>
<source type='unix' path='/var/run/dpdkvhostclient1' mode='server'/>
<target dev='dpdkvhostclient1'/>
<model type='virtio'/>
<driver queues='2'>
<host mrg_rxbuf='on'/>
</driver>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='0c:c4:7a:ea:4b:b3'/>
<source type='unix' path='/var/run/dpdkvhostclient2' mode='server'/>
<target dev='dpdkvhostclient2'/>
<model type='virtio'/>
<driver queues='2'>
<host mrg_rxbuf='on'/>
</driver>
<address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
</interface>
The MAC addresses of these dpdkvhostuser ports are random ones, and the slots are also ones that were not present in the XML before. The NUMA block was added to the CPU section and memoryBacking was added as well; I then rebooted the instance, but the new interfaces still did not appear in the VM.
The dpdkvhostuser ports are shown as DOWN, as below:
ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:00001cfd0870760c
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk-p0): addr:1c:fd:08:70:76:0c
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
2(dpdk-p1): addr:1c:fd:08:70:76:0d
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
3(dpdkvhostclient): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
4(dpdkvhostclient): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:1c:fd:08:70:76:0c
config: PORT_DOWN
state: LINK_DOWN
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
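In case it helps, these are the kinds of checks that should show whether QEMU ever connected to the vhost-user sockets (the libvirt log path below assumes a default install; the socket paths and domain name are the ones used above):
ls -l /var/run/dpdkvhostclient1 /var/run/dpdkvhostclient2      # sockets must exist and be created by QEMU (server side)
ovs-vsctl --columns=name,error,status list Interface dpdkvhostclient1 dpdkvhostclient2
grep -i vhost /var/log/libvirt/qemu/instance-0000000c.log      # look for vhost-user connection or negotiation errors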
What am I missing?

Related

Upload speed very slow in Ubuntu 18.04.6 while download speed is normal

I bought a new 1 Gbps server from OneProvider and installed Ubuntu 18.04.6 on it.
Uploading to the server over SSH or FTP is very fast, but downloading from it is about 100 KB/s over SSH and FTP; I also installed Nginx and downloads from it are likewise about 100 KB/s.
I tried from more than 5 devices in different locations, including another server on the same network using wget, but no attempt exceeded 150 KB/s.
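To take SSH, FTP and HTTP out of the picture, a raw TCP throughput test in both directions can be run with iperf3 (<server-ip> is a placeholder):
iperf3 -s                    # on the server
iperf3 -c <server-ip>        # on a client: client -> server direction
iperf3 -c <server-ip> -R     # reverse mode: server -> client direction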
This is the output of ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d4:ae:52:ca:0f:6e brd ff:ff:ff:ff:ff:ff
inet (serverip)/24 brd 62.210.207.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::d6ae:52ff:feca:f6e/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:ae:52:ca:0f:6f brd ff:ff:ff:ff:ff:ff
This is the output of ethtool eno1:
Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on
Supports Wake-on: g
Wake-on: d
Link detected: yes
This is the output of ethtool -S eno1:
NIC statistics:
rx_bytes: 67518474469
rx_error_bytes: 0
tx_bytes: 1939892582744
tx_error_bytes: 0
rx_ucast_packets: 457688996
rx_mcast_packets: 1105671
rx_bcast_packets: 743858
tx_ucast_packets: 1341579130
tx_mcast_packets: 12
tx_bcast_packets: 4
tx_mac_errors: 0
tx_carrier_errors: 0
rx_crc_errors: 0
rx_align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
tx_deferred: 0
tx_excess_collisions: 0
tx_late_collisions: 0
tx_total_collisions: 0
rx_fragments: 0
rx_jabbers: 0
rx_undersize_packets: 0
rx_oversize_packets: 0
rx_64_byte_packets: 4346996
rx_65_to_127_byte_packets: 430360977
rx_128_to_255_byte_packets: 1072678
rx_256_to_511_byte_packets: 420201
rx_512_to_1023_byte_packets: 250311
rx_1024_to_1522_byte_packets: 23087362
rx_1523_to_9022_byte_packets: 0
tx_64_byte_packets: 899130
tx_65_to_127_byte_packets: 11634758
tx_128_to_255_byte_packets: 2699608
tx_256_to_511_byte_packets: 3443633
tx_512_to_1023_byte_packets: 7211982
tx_1024_to_1522_byte_packets: 1315690035
tx_1523_to_9022_byte_packets: 0
rx_xon_frames: 0
rx_xoff_frames: 0
tx_xon_frames: 0
tx_xoff_frames: 0
rx_mac_ctrl_frames: 0
rx_filtered_packets: 113311
rx_ftq_discards: 0
rx_discards: 0
rx_fw_discards: 0
This is the output of ifconfig eno1 | grep errors:
RX errors 0 dropped 93 overruns 0 frame 0
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
This is the output of lshw -C network:
*-network:0
description: Ethernet interface
product: NetXtreme II BCM5716 Gigabit Ethernet
vendor: Broadcom Inc. and subsidiaries
physical id: 0
bus info: pci@0000:01:00.0
logical name: eno1
version: 20
serial: d4:ae:52:ca:0f:6e
size: 1Gbit/s
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=full firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 ip=(serverip) latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:16 memory:c0000000-c1ffffff
*-network:1 DISABLED
description: Ethernet interface
product: NetXtreme II BCM5716 Gigabit Ethernet
vendor: Broadcom Inc. and subsidiaries
physical id: 0.1
bus info: pci@0000:01:00.1
logical name: eno2
version: 20
serial: d4:ae:52:ca:0f:6f
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=half firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 latency=0 link=no multicast=yes port=twisted pair
resources: irq:17 memory:c2000000-c3ffffff

VNF does not forward packets sent from Client in OpenStack when using a VNF Forwarding Graph

I'm trying to ping from the Client to 8.8.8.8 via VNF1, so I use a VNFFG to force the Client's ICMP traffic through VNF1 before it goes out to the internet.
After I apply the VNFFG rule in OpenStack, VNF1 can see (with tcpdump) the MPLS packet that OpenStack built by encapsulating the Client's ICMP packet, but the forwarding table of VNF1 never receives the packet, so it is not forwarded any further.
This is the packet seen on VNF1:
09:15:12.161830 MPLS (label 13311, exp 0, [S], ttl 255) IP 12.0.0.58 > 8.8.8.8: ICMP echo request, id 10531, seq 15, length 64
I captured that packet and saw that its content is readable (no encryption) and that the source and destination MACs belong to the Client and VNF1 respectively.
This is my VNFFG template:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Sample VNFFG template
topology_template:
  node_templates:
    Forwarding_path1:
      type: tosca.nodes.nfv.FP.TackerV2
      description: demo chain
      properties:
        id: 51
        policy:
          type: ACL
          criteria:
            - name: block_icmp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 #CLIENT PORT ID
                ip_proto: 1
            - name: block_udp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 #CLIENT PORT ID
                ip_proto: 17
        path:
          - forwarder: VNF1
            capability: CP1
  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: Traffic to server
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 1
        dependent_virtual_link: [VL1]
        connection_point: [CP1]
        constituent_vnfs: [VNF1]
      members: [Forwarding_path1]
This is my VNF Descriptor:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example
metadata:
  template_name: sample-tosca-vnfd
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 2 GB
            disk_size: 20 GB
      properties:
        image: VNF1
        availability_zone: nova
        mgmt_driver: noop
        key_name: my-key-pair
        config: |
          param0: key1
          param1: key2
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: my-private-network
        vendor: Tacker
    FIP1:
      type: tosca.nodes.network.FloatingIP
      properties:
        floating_network: public
      requirements:
        - link:
            node: CP1
I used this command to deploy the VNFFG rule:
tacker vnffg-create --vnffgd-template vnffg_test.yaml forward_traffic
I do not know whether the problem could come from the keys I defined for VNF1, because I do not know what param0 and param1 are used for or where they end up.
How can I make the VNF forward these packets?
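One basic thing to verify, assuming VNF1 is an ordinary Linux guest that relies on kernel routing, is whether IP forwarding is enabled inside it at all:
sysctl net.ipv4.ip_forward               # should print net.ipv4.ip_forward = 1
sudo sysctl -w net.ipv4.ip_forward=1     # enable forwarding on the running kernel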

eth1 disappears after using new kernel 5.6.0 | uSID | CentOS

In order to test SRv6 uSID on Linux, I compiled the new 5.6.0 kernel from the following GitHub repository:
https://github.com/netgroup/srv6-usid-linux-kernel.git
After compiling and rebooting, my second network adapter port (eth1) disappeared. The two network adapter ports should be the same type, yet only eth0 was renamed to ens3, as follows:
[root@frank cisco]# uname -a
Linux frank 5.6.0+ #3 SMP Tue Jun 30 17:32:20 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@frank cisco]# dmesg |grep eth
[ 2.311925] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 5e:00:00:00:00:00
[ 2.314897] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 2.770167] e1000 0000:00:04.0 eth1: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[ 2.773194] e1000 0000:00:04.0 eth1: Intel(R) PRO/1000 Network Connection
[ 5.352825] e1000 0000:00:03.0 ens3: renamed from eth0
[root@frank cisco]#
[root@frank cisco]# lshw -class network -businfo
Bus info Device Class Description
========================================================
pci@0000:00:03.0 ens3 network 82540EM Gigabit Ethernet Controller
pci@0000:00:04.0 network 82540EM Gigabit Ethernet Controller
The following is the dmesg output for the two ports:
[root@frank cisco]# dmesg |grep 00:03.0
[ 0.700489] pci 0000:00:03.0: [8086:100e] type 00 class 0x020000
[ 0.702057] pci 0000:00:03.0: reg 0x10: [mem 0xfeb80000-0xfeb9ffff]
[ 0.703921] pci 0000:00:03.0: reg 0x14: [io 0xc000-0xc03f]
[ 0.707532] pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
[ 2.311925] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 5e:00:00:00:00:00
[ 2.314897] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 5.352825] e1000 0000:00:03.0 ens3: renamed from eth0
[root@frank cisco]#
[root@frank cisco]# dmesg |grep 00:04.0
[ 0.708456] pci 0000:00:04.0: [8086:100e] type 00 class 0x020000
[ 0.710057] pci 0000:00:04.0: reg 0x10: [mem 0xfeba0000-0xfebbffff]
[ 0.711846] pci 0000:00:04.0: reg 0x14: [io 0xc040-0xc07f]
[ 0.715515] pci 0000:00:04.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
[ 2.770167] e1000 0000:00:04.0 eth1: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[ 2.773194] e1000 0000:00:04.0 eth1: Intel(R) PRO/1000 Network Connection
The following is the lshw output; note "driver=uio_pci_generic" on the second port:
[root@frank v2.81]# lshw -c network
*-network:0
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 3
bus info: pci@0000:00:03.0
logical name: ens3
version: 03
serial: 5e:00:00:00:00:00
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=172.16.1.140 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:10 memory:feb80000-feb9ffff ioport:c000(size=64) memory:feb00000-feb3ffff
*-network:1
description: Ethernet controller
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 4
bus info: pci@0000:00:04.0
version: 03
width: 32 bits
clock: 33MHz
capabilities: bus_master rom
configuration: driver=uio_pci_generic latency=0 <<<
resources: irq:11 memory:feba0000-febbffff ioport:c040(size=64) memory:feb40000-feb7ffff
And I found that the port is bound to a DPDK-compatible driver, but I did not set any binding configuration...
[root@frank v2.81]# ./dpdk_setup_ports.py -s
Network devices using DPDK-compatible driver
============================================
0000:00:04.0 '82540EM Gigabit Ethernet Controller' drv=uio_pci_generic unused=e1000,igb_uio,vfio-pci <<<
Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio,vfio-pci,uio_pci_generic
Other network devices
=====================
<none>
Does anyone know what is going on...and how to solve this problem...?
Thanks a lot!
Frank
After discussing with colleagues, the issue seems to be related to this link:
https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html
And following the guide above, I can work around the issue, but the issue appears again after a reboot...
[root@frank v2.81]# ls -l /sys/bus/pci/devices/0000:00:04.0/driver
lrwxrwxrwx. 1 root root 0 Jun 30 17:59 /sys/bus/pci/devices/0000:00:04.0/driver -> ../../../bus/pci/drivers/uio_pci_generic
[root@frank v2.81]# echo -n 0000:00:04.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind
[root@frank v2.81]# echo -n 0000:00:04.0 > /sys/bus/pci/drivers/e1000/bind
[79965.358393] e1000 0000:00:04.0 eth0: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[79965.360499] e1000 0000:00:04.0 eth0: Intel(R) PRO/1000 Network Connection
[root@frank v2.81]# ls -l /sys/bus/pci/devices/0000:00:04.0/driver
lrwxrwxrwx. 1 root root 0 Jul 1 16:12 /sys/bus/pci/devices/0000:00:04.0/driver -> ../../../bus/pci/drivers/e1000
[root@frank cisco]# ifconfig eth0 up
[ 221.792886] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 221.796553] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[root@frank cisco]# lshw -c network
*-network:0
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 3
bus info: pci@0000:00:03.0
logical name: ens3
version: 03
serial: 5e:00:00:00:00:00
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=172.16.1.140 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:11 memory:feb80000-feb9ffff ioport:c000(size=64) memory:feb00000-feb3ffff
*-network:1
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 4
bus info: pci@0000:00:04.0
logical name: eth0
version: 03
serial: fa:16:3e:38:fd:91
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:11 memory:feba0000-febbffff ioport:c040(size=64) memory:feb40000-feb7ffff
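Since the sysfs unbind/bind does not survive a reboot, a stopgap is to repeat it at boot time (for example from rc.local or a one-shot systemd unit), although it is probably better to find out what binds uio_pci_generic in the first place; the v2.81 directory and dpdk_setup_ports.py look like a TRex install, which may be rebinding the NIC at startup:
echo -n 0000:00:04.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind
echo -n 0000:00:04.0 > /sys/bus/pci/drivers/e1000/bind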

Floating IP not pinging externally

I have successfully deployed everything in Red Hat OpenStack 11 with the following settings. I am not able to ping the floating IP externally, although I can ping, SSH and do other things through the namespace.
I have three controllers and two hyperconverged compute nodes.
VLAN for RHOSP 11 Setup
172.26.11.0/24 - Provision Network ( VLAN2611 )
172.26.12.0/24 - Internal Network ( VLAN2612 )
172.26.13.0/24 - Tenant Network ( VLAN2613 )
172.26.14.0/24 - Storage Network ( VLAN2614 )
172.26.16.0/24 - Storage Managment ( VLAN2616 )
172.26.17.0/24 - Management Network ( VLAN2617 )
172.30.10.0/23 - External Network ( VLAN3010 )
Server Setup:
[stack@director ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+-----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------------+--------+------------+-------------+-----------------------+
| 3e37a6ed-1b0a-49de-9aa8-5515949ad11a | overcloud-compute-0 | ACTIVE | - | Running | ctlplane=172.26.11.13 |
| 3bab2815-1df8-4b1a-ab70-fa1d00dd5889 | overcloud-compute-1 | ACTIVE | - | Running | ctlplane=172.26.11.25 |
| 531cc5ad-ceb2-40c4-9662-1a984eea1907 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=172.26.11.12 |
| 598cb725-ed9d-4e7f-b8d1-3d5ac0df86d8 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=172.26.11.23 |
| a92cbacd-301e-4201-aa74-b100eb245345 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=172.26.11.28 |
+--------------------------------------+------------------------+--------+------------+-------------+-----------------------+
Controller-0 IPs assigned:
The other two controllers have the same IP address configuration.
[stack@director ~]$ ssh heat-admin@172.26.11.12
Last login: Wed Feb 14 09:23:13 2018 from 172.26.11.254
[heat-admin@overcloud-controller-0 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether c8:1f:66:e1:1a:c3 brd ff:ff:ff:ff:ff:ff
inet 172.26.11.12/24 brd 172.26.11.255 scope global em1
valid_lft forever preferred_lft forever
inet 172.26.11.22/32 brd 172.26.11.255 scope global em1
valid_lft forever preferred_lft forever
inet6 fe80::ca1f:66ff:fee1:1ac3/64 scope link
valid_lft forever preferred_lft forever
3: em2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
link/ether c8:1f:66:e1:1a:c4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ca1f:66ff:fee1:1ac4/64 scope link
valid_lft forever preferred_lft forever
4: em3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
link/ether c8:1f:66:e1:1a:c5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ca1f:66ff:fee1:1ac5/64 scope link
valid_lft forever preferred_lft forever
5: em4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether c8:1f:66:e1:1a:c6 brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether c6:05:34:74:27:e0 brd ff:ff:ff:ff:ff:ff
7: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether c8:1f:66:e1:1a:c4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::800e:f6ff:fe6d:245/64 scope link
valid_lft forever preferred_lft forever
8: vlan2612: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether 9a:12:3a:34:7a:7c brd ff:ff:ff:ff:ff:ff
inet 172.26.12.12/24 brd 172.26.12.255 scope global vlan2612
valid_lft forever preferred_lft forever
inet 172.26.12.18/32 brd 172.26.12.255 scope global vlan2612
valid_lft forever preferred_lft forever
inet6 fe80::9812:3aff:fe34:7a7c/64 scope link
valid_lft forever preferred_lft forever
9: vlan2613: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:2d:8b:7b:f1:21 brd ff:ff:ff:ff:ff:ff
inet 172.26.13.20/24 brd 172.26.13.255 scope global vlan2613
valid_lft forever preferred_lft forever
inet6 fe80::f82d:8bff:fe7b:f121/64 scope link
valid_lft forever preferred_lft forever
10: vlan2614: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether c2:ea:76:13:4e:16 brd ff:ff:ff:ff:ff:ff
inet 172.26.14.18/24 brd 172.26.14.255 scope global vlan2614
valid_lft forever preferred_lft forever
inet6 fe80::c0ea:76ff:fe13:4e16/64 scope link
valid_lft forever preferred_lft forever
11: vlan2616: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether 82:e6:64:04:d7:23 brd ff:ff:ff:ff:ff:ff
inet 172.26.16.12/24 brd 172.26.16.255 scope global vlan2616
valid_lft forever preferred_lft forever
inet6 fe80::80e6:64ff:fe04:d723/64 scope link
valid_lft forever preferred_lft forever
12: vlan2617: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether d2:74:4f:18:b5:3c brd ff:ff:ff:ff:ff:ff
inet 172.26.17.14/24 brd 172.26.17.255 scope global vlan2617
valid_lft forever preferred_lft forever
inet6 fe80::d074:4fff:fe18:b53c/64 scope link
valid_lft forever preferred_lft forever
13: vlan3010: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether 32:e2:86:b9:d2:3e brd ff:ff:ff:ff:ff:ff
inet 172.30.10.21/23 brd 172.30.11.255 scope global vlan3010
valid_lft forever preferred_lft forever
inet6 fe80::30e2:86ff:feb9:d23e/64 scope link
valid_lft forever preferred_lft forever
14: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether f2:7e:78:3c:ee:49 brd ff:ff:ff:ff:ff:ff
15: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether a2:4d:a0:64:3a:4e brd ff:ff:ff:ff:ff:ff
16: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
17: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
18: gre_sys@NONE: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65490 qdisc pfifo_fast master ovs-system state UNKNOWN qlen 1000
link/ether f6:71:95:be:da:53 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f471:95ff:febe:da53/64 scope link
valid_lft forever preferred_lft forever
Controller-0 OVS bridges:
qg is the external interface of the SDN router
qr is the internal interface of the SDN router
These interfaces are created directly inside br-int. In older versions of RHOSP there was no patch between br-int and br-ex, so the qg interface was created directly in br-ex. In this version we find that both interfaces are created inside br-int; if I change the external bridge to br-int in all L3 agents, the router interfaces show as down, even though all the ping and SSH communication still happens inside the qrouter namespace itself.
[heat-admin@overcloud-controller-0 ~]$ sudo ovs-vsctl show
f6411a64-6dbd-4a7d-931a-6a99b63d7911
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "qg-0f094325-6c"
tag: 10
Interface "qg-0f094325-6c"
type: internal
Port "qr-fff1e03e-44"
tag: 8
Interface "qr-fff1e03e-44"
type: internal
Port "tapef7874a7-a3"
tag: 8
Interface "tapef7874a7-a3"
type: internal
Port "ha-a3430c62-90"
tag: 4095
Interface "ha-a3430c62-90"
type: internal
Port "ha-37bad2be-92"
tag: 9
Interface "ha-37bad2be-92"
type: internal
Port "tap102385e5-b7"
tag: 4
Interface "tap102385e5-b7"
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "gre-ac1a0d0f"
Interface "gre-ac1a0d0f"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.15"}
Port "gre-ac1a0d10"
Interface "gre-ac1a0d10"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.16"}
Port "gre-ac1a0d16"
Interface "gre-ac1a0d16"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.22"}
Port br-tun
Interface br-tun
type: internal
Port "gre-ac1a0d0c"
Interface "gre-ac1a0d0c"
type: gre
options: {df_default="true", in_key=flow, local_ip="172.26.13.20", out_key=flow, remote_ip="172.26.13.12"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "vlan2617"
tag: 2617
Interface "vlan2617"
type: internal
Port "vlan2612"
tag: 2612
Interface "vlan2612"
type: internal
Port "vlan2613"
tag: 2613
Interface "vlan2613"
type: internal
Port br-ex
Interface br-ex
type: internal
Port "vlan3010"
tag: 3010
Interface "vlan3010"
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "vlan2614"
tag: 2614
Interface "vlan2614"
type: internal
Port "vlan2616"
tag: 2616
Interface "vlan2616"
type: internal
Port "bond1"
Interface "em2"
Interface "em3"
ovs_version: "2.6.1"
Neutron Agent List
[heat-admin@overcloud-controller-0 ~]$ neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------+--------------------+--------------------------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------+--------------------+--------------------------------+-------------------+-------+----------------+---------------------------+
| 08afba9b-1952-4c43-a3ec- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| 1b6a1cf49370 | | controller-1.localdomain | | | | |
| 1c7794b0-726c-4d70-81bc- | Metadata agent | overcloud- | | :-) | True | neutron-metadata-agent |
| df761ad105bd | | controller-1.localdomain | | | | |
| 23aba452-ecb2-4d61-96b5-f8224c | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| 6de482 | | controller-0.localdomain | | | | |
| 2acabaa4-cad1-4e25-b102-fe5f72 | DHCP agent | overcloud- | nova | :-) | True | neutron-dhcp-agent |
| 0de5b8 | | controller-2.localdomain | | | | |
| 38074c45-565c-45bb- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| ae21-c636c9df73b1 | | controller-2.localdomain | | | | |
| 58b8a5bd-e438-4cb5-9267-ad87c6 | DHCP agent | overcloud- | nova | :-) | True | neutron-dhcp-agent |
| 10dbb3 | | controller-1.localdomain | | | | |
| 5fbe010b-34af- | Metadata agent | overcloud- | | :-) | True | neutron-metadata-agent |
| 4a14-9965-393f37587682 | | controller-0.localdomain | | | | |
| 6e1d3d2a- | Metadata agent | overcloud- | | :-) | True | neutron-metadata-agent |
| 6ec4-47ab-8639-2ae945b19adc | | controller-2.localdomain | | | | |
| 901c0300-5081-412d- | L3 agent | overcloud- | nova | :-) | True | neutron-l3-agent |
| a7e8-2e77acc098bf | | controller-2.localdomain | | | | |
| b0b47dfb- | DHCP agent | overcloud- | nova | :-) | True | neutron-dhcp-agent |
| 7d78-46e3-9c22-b1172989cfef | | controller-0.localdomain | | | | |
| cb0b6b69-320d-48dd- | L3 agent | overcloud- | nova | :-) | True | neutron-l3-agent |
| b3e3-f504889edae9 | | controller-0.localdomain | | | | |
| cdf555d7-0537-4bdc- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| bf77-5abe77709fe3 | | compute-0.localdomain | | | | |
| ddd0bb3e-0429-4e10-8adb- | L3 agent | overcloud- | nova | :-) | True | neutron-l3-agent |
| b81233e75ac0 | | controller-1.localdomain | | | | |
| e7524f86-81e4-46e5-ab2c- | Open vSwitch agent | overcloud- | | :-) | True | neutron-openvswitch-agent |
| d6311427369d | | compute-1.localdomain | | | | |
+--------------------------------+--------------------+--------------------------------+-------------------+-------+----------------+---------------------------+
One of the L3 agents:
[heat-admin@overcloud-controller-0 ~]$ neutron agent-show 901c0300-5081-412d-a7e8-2e77acc098bf
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+---------------------+-------------------------------------------------------------------------------+
| Field | Value |
+---------------------+-------------------------------------------------------------------------------+
| admin_state_up | True |
| agent_type | L3 agent |
| alive | True |
| availability_zone | nova |
| binary | neutron-l3-agent |
| configurations | { |
| | "agent_mode": "legacy", |
| | "gateway_external_network_id": "", |
| | "handle_internal_only_routers": true, |
| | "routers": 1, |
| | "interfaces": 1, |
| | "floating_ips": 1, |
| | "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver", |
| | "log_agent_heartbeats": false, |
| | "external_network_bridge": "", |
| | "ex_gw_ports": 1 |
| | } |
| created_at | 2018-02-01 06:54:56 |
| description | |
| heartbeat_timestamp | 2018-02-02 13:25:52 |
| host | overcloud-controller-2.localdomain |
| id | 901c0300-5081-412d-a7e8-2e77acc098bf |
| started_at | 2018-02-02 11:02:27 |
| topic | l3_agent |
+---------------------+-------------------------------------------------------------------------------+
Neutron Router and DHCP Agent.
The Neutron DHCP agent namespace is available and is used to ping the SDN router gateway.
[heat-admin@overcloud-controller-0 ~]$ ip netns
qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb
qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10
Router Gateway using QDHCP
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10 ping 172.30.10.173
PING 172.30.10.173 (172.30.10.173) 56(84) bytes of data.
64 bytes from 172.30.10.173: icmp_seq=1 ttl=64 time=1.16 ms
64 bytes from 172.30.10.173: icmp_seq=2 ttl=64 time=0.090 ms
64 bytes from 172.30.10.173: icmp_seq=3 ttl=64 time=0.092 ms
^Z
[1]+ Stopped sudo ip netns exec qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10 ping 172.30.10.173
Floating IP of an instance using QDHCP
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-2cee840e-f683-48ed-a05f-ac993f6cac10 ping 172.30.10.178
PING 172.30.10.178 (172.30.10.178) 56(84) bytes of data.
From 172.30.10.178 icmp_seq=1 Destination Host Unreachable
From 172.30.10.178 icmp_seq=2 Destination Host Unreachable
From 172.30.10.178 icmp_seq=3 Destination Host Unreachable
From 172.30.10.178 icmp_seq=4 Destination Host Unreachable
^C
--- 172.30.10.178 ping statistics ---
6 packets transmitted, 0 received, +4 errors, 100% packet loss, time 5000ms
pipe 4
Router Gateway using QROUTER
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.173
PING 172.30.10.173 (172.30.10.173) 56(84) bytes of data.
64 bytes from 172.30.10.173: icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from 172.30.10.173: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 172.30.10.173: icmp_seq=3 ttl=64 time=0.063 ms
64 bytes from 172.30.10.173: icmp_seq=4 ttl=64 time=0.056 ms
^Z
[5]+ Stopped sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.173
Floating IP of an instance using QROUTER
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.178
PING 172.30.10.178 (172.30.10.178) 56(84) bytes of data.
From 172.30.10.178 icmp_seq=1 Destination Host Unreachable
From 172.30.10.178 icmp_seq=2 Destination Host Unreachable
From 172.30.10.178 icmp_seq=3 Destination Host Unreachable
From 172.30.10.178 icmp_seq=4 Destination Host Unreachable
^Z
[6]+ Stopped sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ping 172.30.10.178
Route of QRouter
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 0 0 0 qg-e8f74c7c-58
30.30.30.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-6a11beee-45
link-local 0.0.0.0 255.255.255.0 U 0 0 0 ha-4ad3b415-1b
169.254.192.0 0.0.0.0 255.255.192.0 U 0 0 0 ha-4ad3b415-1b
172.30.10.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-e8f74c7c-58
IP Route of QRouter
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ip route
default via 172.30.10.10 dev qg-e8f74c7c-58
30.30.30.0/24 dev qr-6a11beee-45 proto kernel scope link src 30.30.30.254
169.254.0.0/24 dev ha-4ad3b415-1b proto kernel scope link src 169.254.0.1
169.254.192.0/18 dev ha-4ad3b415-1b proto kernel scope link src 169.254.192.3
172.30.10.0/24 dev qg-e8f74c7c-58 proto kernel scope link src 172.30.10.173
Router Gateway IP & Floating IP
The router gateway IP and the floating IP are assigned to the qg interface.
[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
21: ha-4ad3b415-1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:16:3e:08:33:4b brd ff:ff:ff:ff:ff:ff
inet 169.254.192.3/18 brd 169.254.255.255 scope global ha-4ad3b415-1b
valid_lft forever preferred_lft forever
inet 169.254.0.1/24 scope global ha-4ad3b415-1b
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe08:334b/64 scope link
valid_lft forever preferred_lft forever
22: qg-e8f74c7c-58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:16:3e:90:73:04 brd ff:ff:ff:ff:ff:ff
inet 172.30.10.173/24 scope global qg-e8f74c7c-58
valid_lft forever preferred_lft forever
inet 172.30.10.178/32 scope global qg-e8f74c7c-58
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe90:7304/64 scope link
valid_lft forever preferred_lft forever
23: qr-6a11beee-45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
link/ether fa:16:3e:cd:08:bf brd ff:ff:ff:ff:ff:ff
inet 30.30.30.254/24 scope global qr-6a11beee-45
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fecd:8bf/64 scope link
valid_lft forever preferred_lft forever
Expected behaviour:
We should be able to reach the instance's floating IP externally.
Currently we are not able to ping the floating IP assigned to the instance.
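A packet capture on the router's external leg and on the provider VLAN would show whether ARP and ICMP for the floating IP ever reach the wire (interface and namespace names are the ones from this controller):
sudo ip netns exec qrouter-bb4d96e5-07e1-4ad6-b120-f11c6a2298eb tcpdump -ni qg-e8f74c7c-58 arp or icmp
sudo tcpdump -ni vlan3010 arp or icmp    # on the controller currently hosting the router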

Why won't Wildfly listen on http port?

What I'm trying to do:
I want to run Wildfly on Ubuntu (as an Oracle VM).
The issue:
I recently found it stopped listening on the http port. The following hopefully demonstrates that Wildfly is running and configured to listen on 8080. However, nothing is listening on this port:
polly@polly-VirtualBox:/opt/wildfly/bin$ ./jboss-cli.sh
Listening for transport dt_socket at address: 8787
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9990 /] [standalone@localhost:9990 /] /socket-binding-group=standard-sockets/socket-binding=http:read-resource
{
"outcome" => "success",
"result" => {
"client-mappings" => undefined,
"fixed-port" => false,
"interface" => undefined,
"multicast-address" => undefined,
"multicast-port" => undefined,
"name" => "http",
"port" => expression "${jboss.http.port:8080}"
}
}
polly@polly-VirtualBox:~$ sudo netstat -pl | grep 8080
polly@polly-VirtualBox:~$ sudo netstat -pl | grep 9990
tcp 0 0 localhost:9990 *:* LISTEN 1106/java
polly@polly-VirtualBox:~$ ps -ef | grep 1106
polly 1106 1036 2 09:18 ? 00:00:05 java -D[Standalone] -server -Xms64m -Xmx512m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file=/opt/wildfly/standalone/log/server.log -Dlogging.configuration=file:/opt/wildfly/standalone/configuration/logging.properties -jar /opt/wildfly/jboss-modules.jar -mp /opt/wildfly/modules org.jboss.as.standalone -Djboss.home.dir=/opt/wildfly -Djboss.server.base.dir=/opt/wildfly/standalone -c standalone.xml
polly 2477 2384 0 09:22 pts/0 00:00:00 grep --color=auto 1106
What I have tried:
It seemed pointless to offset the ports because 8080, as far as I can tell, is actually free, but I tried it anyway (to no avail):
polly@polly-VirtualBox:/opt/wildfly/bin$ sudo netstat -pl | grep 8180
polly@polly-VirtualBox:/opt/wildfly/bin$ sudo netstat -pl | grep 10090
tcp 0 0 localhost:10090 *:* LISTEN 3165/java
polly@polly-VirtualBox:/opt/wildfly/bin$ curl -X GET http://localhost:10090/console/App.html
<!DOCTYPE html>
<html>
<head>
<title>Management Interface</title>
<meta http-equiv="X-UA-Compatible" content="IE=EDGE" />
<meta http-equiv="content-type" content="text/html;charset=utf-8" />
<script type="text/javascript" language="javascript" src="app/app.nocache.js"></script>
<link rel="shortcut icon" href="/console/images/favicon.ico" />
</head>
<body>
<!-- history iframe required on IE -->
<iframe src="javascript:''" id="__gwt_historyFrame" style="width:0px;height:0px;border:0px"></iframe>
<!-- pre load images-->
<div style="visibility:hidden"><img src="images/loading_lite.gif"/></div>
</body>
</html>
polly@polly-VirtualBox:/opt/wildfly/bin$ curl -X GET http://localhost:8180
curl: (7) Failed to connect to localhost port 8180: Connection refused
I also tried studying the logs (link below) but I can't see any obvious problem.
My question:
Why is Wildfly not listening on the http port?
Any suggestion as to what I can try next would be highly appreciated.
Logs
https://1drv.ms/f/s!Ao4w10eVqKCjbLJ3OQdiy_WH9Q8
While browsing your standalone.xml I saw that you have modified it quite a lot. For example, you removed the entire profile section compared to the default standalone.xml. For the web server to work, you need at least Undertow.
So, at a minimum, you need to add:
<profile>
<subsystem xmlns="urn:jboss:domain:io:1.1">
<worker name="default"/>
<buffer-pool name="default"/>
</subsystem>
<subsystem xmlns="urn:jboss:domain:undertow:3.0">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="https"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<filter-ref name="server-header"/>
<filter-ref name="x-powered-by-header"/>
</host>
</server>
<servlet-container name="default">
<jsp-config/>
<websockets/>
</servlet-container>
<handlers>
<file name="welcome-content" path="${jboss.home.dir}/welcome-content"/>
</handlers>
<filters>
<response-header name="server-header" header-name="Server" header-value="WildFly/10"/>
<response-header name="x-powered-by-header" header-name="X-Powered-By" header-value="Undertow/1"/>
</filters>
</subsystem>
</profile>
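After restoring the profile and restarting, the listener can be confirmed quickly (the CLI address matches the default-server/http-listener names used in the snippet above; the ss check is generic):
/subsystem=undertow/server=default-server/http-listener=default:read-resource    # from jboss-cli.sh, once connected
ss -tlnp | grep 8080                                                              # from the shell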
