I have 3 network nodes running neutron-server.
Only one of these nodes is attached to the external network.
I use ML2 with Open vSwitch.
In the bridge mappings of the node connected to the external network (reachable via floating IPs), I have external_net mapped to the correct bridge.
On the other nodes I do not have this mapping defined, and I do not have the corresponding interfaces.
The issue I have is the following:
When I try to start a virtual machine that is connected to the external network, I see this error in the logs:
neutron-server: 2016-09-07 12:33:00.975 57352 ERROR neutron.plugins.ml2.managers [req-def18170-5e45-4fef-9653-e008faa39913 - - - - -] Failed to bind port 035a58e1-f18f-428b-b78e-e8c0aaba7d14 on host node002 for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'external_net', 'id': u'0d4590e5-0c48-4316-8b78-1636d3f44d43', 'network_type': u'flat'}]
neutron-server: 2016-09-07 12:33:00.975 57352 ERROR neutron.plugins.ml2.managers [req-def18170-5e45-4fef-9653-e008faa39913 - - - - -] Failed to bind port 035a58e1-f18f-428b-b78e-e8c0aaba7d14 on host node003 for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'external_net', 'id': u'0d4590e5-0c48-4316-8b78-1636d3f44d43', 'network_type': u'flat'}]
This happens on both nodes (node002 and node003) because they DO NOT have this network defined. So is this a bug, or is such a setup simply not valid?
Thank you
In a typical OpenStack deployment you do not bind Nova instances directly to the external network. As you have already surmised, this won't work because that network isn't provisioned on the compute hosts.
Instead, you attach your instances to an internal network, and then you assign floating IP addresses from the external network using, e.g., nova floating-ip-create and nova floating-ip-associate.
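A minimal sketch of that workflow, assuming an internal network named private, an external pool named external_net, and an instance named myvm (all placeholder names):

# boot the instance on the internal network
nova boot --flavor m1.small --image ubuntu --nic net-id=$(neutron net-show private -F id -f value) myvm
# allocate a floating IP from the external pool; note the address it prints
nova floating-ip-create external_net
# associate the allocated address (example value shown) with the instance
nova floating-ip-associate myvm 203.0.113.10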
An alternative solution is to use "provider external networks", an arrangement in which your nova instances are attached directly to L2 networks with external connectivity, rather than relying on the floating-ip NAT solution described in the previous paragraphs.
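A sketch of that variant with the same era's CLI (the network name ext-direct and the subnet range are placeholders; note that direct attachment requires the compute hosts themselves to carry the external_net bridge mapping, which is exactly what was missing on node002 and node003):

neutron net-create ext-direct --provider:network_type flat --provider:physical_network external_net --shared
neutron subnet-create ext-direct 203.0.113.0/24 --name ext-direct-subnet
nova boot --flavor m1.small --image ubuntu --nic net-id=$(neutron net-show ext-direct -F id -f value) vm1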
The reason behind the error was a bad configuration on the nodes that DO NOT host the provider network,
mainly in the ML2 core file ml2_conf.ini.
The parameter flat_networks should be set to the appropriate value on each node:
on the node which is connected to all flat networks (including the internal network) it should be set to
flat_networks = *
and on a node that does not host all flat networks (the provider network, for instance):
flat_networks = physical_internal
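In full, the relevant section of ml2_conf.ini would look like this (flat_networks lives under [ml2_type_flat]; physical_internal is the label used in this deployment). On the node hosting all flat networks:

[ml2_type_flat]
flat_networks = *

and on the other nodes:

[ml2_type_flat]
flat_networks = physical_internal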
I believe it won't work. You need to have bound ports on all 3 of your network nodes.
A quick test would be to stop the neutron-server, neutron-dhcp-agent, neutron-l3-agent and neutron-metadata-agent services on the 2 network nodes that are not bound to external ports... and test again.
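For example, assuming systemd-managed services (unit names can vary by distribution):

# on each of the two nodes without external connectivity
systemctl stop neutron-server neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent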
Related
I want to manage oVirt VM networks with OpenStack Neutron (OVN).
My setup:
oVirt v4.4.10 (created one VM named vm1)
OpenStack Victoria (5.4.0)
My steps:
created network 'vlan-1' on the OpenStack dashboard network page.
created provider 'neutron' and selected the OpenStack networking type
imported network 'vlan-1' from the 'neutron' provider.
created nic1 in vm1 and selected network 'vlan-1'
ran vm1.
But vm1 failed to start. The engine.log shows:
2022-11-02 12:05:17,674+08 WARN [org.ovirt.engine.core.bll.provider.network.openstack.BaseNetworkProviderProxy] (EE-ManagedThreadFactory-engine-Thread-11256) [a0d16d0c-f5df-4992-808f-0095daff628a] Host binding id for external network vlan-1 on host ovn-comp-nodes is null, using host id 192.168.3.215 to allocate vNIC nic1 instead. Please provide an after_get_caps hook for the plugin type OPEN_VSWITCH on host ovn-comp-nodes
2022-11-02 12:05:23,180+08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-21) [6f97b9be] EVENT_ID: VM_DOWN_ERROR(119), VM ovn-comp-node1 is down with error. Exit message: Cannot get interface MTU on 'vlan-1': No such device.
I am reading oVirt's source code but still can't find the reason...
Instance creation failed with this error:
Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance
The controller node shows no errors.
There is an exception on the compute node:
nova.exception.PortBindingFailed: Binding failed for port b70c2f30-f83c-4cae-abf8-98be39a382d5, please check neutron logs for more information.
Neutron's logs show no errors.
Neutron config (linuxbridge_agent.ini):
[linux_bridge]
physical_interface_mappings = provider:ens3
[vxlan]
enable_vxlan = true
local_ip = 10.101.1.46
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
How can I fix port binding?
1. Check that the port was created successfully with openstack port show b70c2f30-f83c-4cae-abf8-98be39a382d5; it is probably fine, since Nova got far enough to execute the port binding.
2. Check whether all network components work with openstack network agent list, and check the logs of any component that is not working.
3. Make sure all hypervisors keep consistent time.
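A sketch of those checks (the port ID is the one from the error; output columns and agent names will differ per deployment):

# 1. port status and how (or whether) it was bound
openstack port show b70c2f30-f83c-4cae-abf8-98be39a382d5 -c status -c binding_vif_type
# 2. agent health; investigate any agent reported as down
openstack network agent list
# 3. clock check, to be run on every node
timedatectl status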
Was this port created first and then attached as an interface? If yes, please check the port owner (project).
Is the port you created of vnic_type normal or direct?
I installed OpenStack using DevStack on a VirtualBox VM running Ubuntu 18.04. I am trying to create a provider network with the following command:
neutron net-create mgmt --provider:network_type=vlan --provider:physical_network=physnet_em1 --provider:segmentation_id=500 --shared
This command returns the following error:
neutronclient.common.exceptions.BadRequest: Invalid input for operation:
physical_network 'physnet_em1' unknown for VLAN provider network.
Neutron server returns request_ids: ['req-7a0bfe13-b4c3-4408-bc60-8d36e8bc3f9a']
I would like to know how to proceed.
You should use the openstack client commands like openstack network create ..., because the per-component client commands, like your neutron net-create, are deprecated. There are a few special cases that are only possible with the client library of a single component, but most functionality is covered by the openstack client. Unfortunately, documentation often still uses the old commands, because many documents are not up to date.
To avoid the error you had, you only need to remove --provider:physical_network=physnet_em1 and --provider:segmentation_id=500 from your command. The physical network and VLAN range should be defined in the ml2_conf.ini of the neutron-server, for example like this:
[ml2]
type_drivers = flat,vlan,vxlan
...
[ml2_type_vlan]
network_vlan_ranges = physnet_em1:171:280
...
So with neutron net-create mgmt --provider:network_type=vlan --shared it works in my test deployment (at least there is no error in the terminal; I have not tested the network connection). The openstack command for this task would be openstack network create --provider-network-type vlan mgmt --share --external.
Normally, as far as I know, a flat network type is used for the provider network instead of vlan, because the provider network should normally not be directly connected to any VM. The other, non-provider networks can be vlan or vxlan and are then connected to the provider network with a neutron router (see the sketch at the end of this answer). An openstack command for creating such a provider network could be: openstack network create --provider-network-type flat --provider-physical-network physnet_em1 mgmt --share --external. For flat networks you have the possibility to define a provider physical network via the command line.
Some documentation, such as https://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-networks-provider.html, also uses a flat network as the provider network type.
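A sketch of the router pattern mentioned above (the names private, private-subnet and router1 plus the subnet range are placeholders; mgmt is the external network created earlier):

openstack network create private
openstack subnet create --network private --subnet-range 192.168.10.0/24 private-subnet
openstack router create router1
openstack router set --external-gateway mgmt router1
openstack router add subnet router1 private-subnet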
I want to install neutron server on different nodes. In my environment there will be 3 provider networks named provider1, provider2 and provider3, all of them flat networks. I want each neutron server to manage a different provider network (neutron1 only controls provider1, neutron2 controls provider2 and neutron3 controls provider3). VMs will have internal (overlay) networks and use virtual routers to access the provider networks. The interface mapping on the neutron servers is as given below:
Neutron 1
Bond 0: management + overlay
Bond 1: used for provider1
Neutron 2
Bond 0: management + overlay
Bond 1: used for provider2
Neutron 3
Bond 0: management + overlay
Bond 1: used for provider3
A virtual router (VR) is randomly scheduled across multiple OpenStack networking nodes. My question is: how can I deploy a VR on a specific neutron node (e.g., a VR whose gateway address comes from provider1 should be deployed on neutron1)? Alternatively, if I create a highly available VR, it will be deployed on all neutron servers; how can I select the active virtual router in that case?
I think DVR (Distributed Virtual Router) would be helpful in your case.
Let me describe some differences between DVR and non-DVR setups in terms of the routes VM traffic takes.
With DVR, a virtual router is created on each compute node that hosts VMs, which reduces the load on the network node and removes it as a single point of failure (SPOF).
Differences based on how traffic is routed:

VM placement              | subnets   | router used with DVR        | router used without DVR
--------------------------|-----------|-----------------------------|----------------------------------------------------
all VMs on the same node  | different | each VM's own compute node  | the designated network node (running the L3 agent)
VMs across multiple nodes | different | each VM's own compute node  | the designated network node (running the L3 agent)
Differences when using floating IPs (note that SNAT traffic, i.e. from internal to external without a floating IP, is not HA; just one node routes it as of Ocata):

DVR                                                | non-DVR
---------------------------------------------------|------------------------------------
each compute node's DVR holds its own floating IPs | floating IPs on the network node only
As the following configuration steps cover just a simple pattern, please refer to the official tutorials when adapting them to your system.
Prerequisite: all compute nodes have the L3, DHCP, metadata and Open vSwitch agents installed.
Enable DVR on all compute nodes.
# vim /etc/neutron/neutron.conf
[DEFAULT]
...snip...
router_distributed = True
...snip...
Add the l2population driver on the controller node.
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
...snip...
mechanism_drivers = openvswitch,l2population
...snip...
Configure the SNAT router on the specified compute node.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr_snat
...snip...
Configure the agent mode to DVR on the remaining compute nodes.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr
...snip...
Edit openvswitch config on all compute nodes.
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
...snip...
l2_population = True
enable_distributed_routing = True
...snip...
Restart services for the changes to take effect.
On controller node.
# systemctl restart neutron-server
On all compute nodes.
# systemctl restart neutron-l3-agent neutron-openvswitch-agent
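To verify, you can list the L3 agents and check each one's reported mode (dvr_snat on the designated node, dvr on the others); a sketch assuming the openstack client is installed (the agent ID is a placeholder taken from the list output):
# openstack network agent list --agent-type l3
# openstack network agent show <agent-id>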
I hope this will help you.
I am trying to get non-SR-IOV PCI passthrough working with OpenStack Liberty, but have not been successful.
These are the steps I followed:
1. Create pci_passthrough_whitelist in nova.conf of the compute node as pci_passthrough_whitelist = {"address": "0000:89:00.0", "physical_network": "test_phy_nw"} (see the nova.conf sketch after this list).
2. As SR-IOV is not used, do not add sriovnicswitch as a mechanism driver in ml2, and do not do any ML2 SR-IOV configuration. Do not configure pci_passthrough_alias, as the alias does not support a BDF (address).
3. Create a neutron net: neutron net-create --name test_os_nw --provider:physical_network test_phy_nw --provider:network_type flat (is flat OK, or should I use vlan or vxlan networks?)
4. Create a port with direct vnic_type: neutron port-create --name pci.port --binding:vnic_type direct
5. Boot an instance with this port: nova boot --flavor m1.small --image ubuntu --nic port-id=$(neutron port-show pci.port -F id -f value) test.vm
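For reference, the whitelist entry from step 1 sits in nova.conf on the compute node like this (address and physical network name taken from the question; in Liberty the option lives under [DEFAULT]):

[DEFAULT]
pci_passthrough_whitelist = {"address": "0000:89:00.0", "physical_network": "test_phy_nw"}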
Two questions in this regard:
1. Are the steps mentioned above correct, and am I missing anything?
2. Is the process to achieve non-SR-IOV PCI passthrough different from SR-IOV PCI passthrough? If it is different, can you please share a link to it (or, better, give a quick summary of the process)?
After some more experimenting and reading, I figured out that BDF-based passthrough is supported only for SR-IOV (as of Liberty).