Multiple provider network management on different neutron nodes - openstack

I want to install the neutron server on several different nodes. In my environment there will be 3 flat provider networks, named provider1, provider2 and provider3. I want each neutron server to manage a different provider network (neutron1 only controls provider1, neutron2 controls provider2, and neutron3 controls provider3). VMs will be attached to internal (overlay) networks and use virtual routers to reach the provider networks. The interface mapping on the neutron servers is as follows:
Neutron 1
Bond 0 : Management + overlay
Bond 1 : used for provider1
Neutron 2
Bond 0 : Management + overlay
Bond 1 : used for provider2
Neutron 3
Bond 0 : Management + overlay
Bond 1 : used for provider3
A virtual router (VR) is normally scheduled randomly across the OpenStack networking nodes. My question is: how can I deploy a VR on a specific neutron node (e.g. a VR whose gateway address comes from provider1 should be deployed on neutron1)? Alternatively, if I create a highly available VR, it will be deployed on all neutron servers; how can I select which node hosts the active virtual router in that case?

I think DVR (Distributed Virtual Router) could be helpful in your case.
Below I describe some differences between DVR and non-DVR setups, based on how VM traffic is routed.
With DVR, a virtual router is created on each compute node that hosts VMs, which reduces the load on the network node and removes it as a single point of failure (SPOF).
Differences based on how traffic is routed:
VM placement              | subnets   | DVR                                         | non-DVR
--------------------------|-----------|---------------------------------------------|--------------------------------------------------
all on the same node      | different | routed on the compute node running each VM  | routed on the designated network node (L3 agent)
across multiple nodes     | different | routed on the compute node running each VM  | routed on the designated network node (L3 agent)
Difference when using floating IPs (note that centralized SNAT, used by instances without a floating IP to reach external networks, is still handled by a single node and is not HA as of Ocata):
DVR                                             | non-DVR
------------------------------------------------|----------------------------------------
each compute node handles its own floating IPs  | only the network node handles them
The following configuration steps cover only a simple pattern; refer to the official tutorials when adapting them to your system.
Prerequisite: the L3, DHCP, metadata and Open vSwitch agents are installed on all compute nodes.
Enable DVR on all compute nodes.
# vim /etc/neutron/neutron.conf
[DEFAULT]
...snip...
router_distributed = True
...snip...
Add the l2population mechanism driver on the controller node.
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
...snip...
mechanism_drivers = openvswitch,l2population
...snip...
Configure the SNAT router on the specified compute node.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr_snat
...snip...
Configure the agent mode to DVR on the remaining compute nodes.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
...snip...
agent_mode = dvr
...snip...
Edit openvswitch config on all compute nodes.
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
...snip...
l2_population = True
enable_distributed_routing = True
...snip...
Restart the services for the changes to take effect.
On controller node.
# systemctl restart neutron-server
On all compute nodes.
# systemctl restart neutron-l3-agent neutron-openvswitch-agent
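As a quick check afterwards (a minimal sketch, not part of the original steps; router1 is just an example name), you can verify the agent modes and create a router explicitly in distributed mode:
# openstack network agent list --agent-type l3
# openstack router create --distributed router1
# neutron l3-agent-list-hosting-router router1
The last command lists the L3 agents currently hosting the router.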
I hope this will help you.

Related

Install Kubernetes + Cilium on different networks

There is the following topology:
'left-1', 'left-2', 'right-1', 'right-2', 'center' - hosts (the DNS names match the host names).
"Clouds" - networks.
kubeadm, kubectl, kubelet and docker are installed correctly on all hosts.
Kubernetes should be installed like this: 'Master-1' on host 'left-1', 'Master-2' on host 'right-1', and workers on hosts 'left-2' and 'right-2'.
All hosts can ping each other by domain name. All ports on all hosts are open and there is no firewall anywhere.
All hosts have access to the internet.
Here is the manual for installing a highly available Kubernetes cluster:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
If I install Kubernetes only on 'left-1' and 'left-2', everything works fine.
If I install Kubernetes only on 'right-1' and 'right-2', everything works fine.
But if I install it on all nodes, pods from the left cannot connect to pods on the right, and vice versa.
How can I install Kubernetes on the left and right nodes together?
I use Cilium for networking.
I installed Cilium with the command:
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.6.8/install/kubernetes/quick-install.yaml
When I initialize the first master node, I specify the pod CIDR 10.217.0.0/16.
I also tried installing etcd separately from Kubernetes and got this error:
2020-06-25 02:49:37.073290 I | embed: rejected connection from "10.7.0.1:48422" (error "tls: \"10.7.0.1\" does not match any of DNSNames [\"right-1\" \"localhost\"]", ServerName "", IPAddresses ["10.8.1.1" "127.0.0.1" "::1" "10.8.1.1"], DNSNames ["right-1" "localhost"])
10.7.0.1 is 'center', and 'center' is not part of the etcd cluster. Why does etcd check it?
[left-1]$ traceroute right-1
traceroute to right-1 (10.8.1.1), 30 hops max, 60 byte packets
1 center (10.7.0.1) 1.381 ms 1.252 ms 1.159 ms
2 right-1 (10.8.1.1) 1.068 ms 0.990 ms 0.912 ms
We solved the problem.
The cluster must be created with the command:
kubeadm init --config=kubeadm-config.yaml --upload-certs
Where kubeadm-config.yaml contains:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.3
controlPlaneEndpoint: "10.7.1.1:6443"
networking:
  podSubnet: "10.217.0.0/16"
etcd:
  local:
    serverCertSANs: ["10.7.1.1", "10.7.2.2", "10.7.0.1", "10.8.1.1", "10.8.2.2", "10.8.0.1"]
    peerCertSANs: ["10.7.1.1", "10.7.2.2", "10.7.0.1", "10.8.1.1", "10.8.2.2", "10.8.0.1"]
Pay attention to the serverCertSANs and peerCertSANs parameters: they contain 10.7.0.1 and 10.8.0.1. These IPs appear as the client address in packets arriving at the nodes, so they must be registered as trusted. If other IPs show up in inter-node traffic, they must be registered as well.
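As a rough sketch of the remaining steps (the token, CA hash and certificate key below are placeholders for the values printed by kubeadm init --upload-certs):
# on right-1 (the second control-plane node)
kubeadm join 10.7.1.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key>
# on left-2 and right-2 (workers)
kubeadm join 10.7.1.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>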

Neutron - Invalid input for operation: physical_network 'physnet_em1' unknown for VLAN provider network

I installed Openstack using Devstack on a VirtualBox VM running Ubuntu 18.04. I am trying to create a provider network with the following command:
neutron net-create mgmt --provider:network_type=vlan --provider:physical_network=physnet_em1 --provider:segmentation_id=500 --shared
This command returns the following error:
neutronclient.common.exceptions.BadRequest: Invalid input for operation:
physical_network 'physnet_em1' unknown for VLAN provider network.
Neutron server returns request_ids: ['req-7a0bfe13-b4c3-4408-bc60-8d36e8bc3f9a']
I would like to know how to proceed.
You should use the openstack client commands, e.g. openstack network create ..., because the per-project client commands such as your neutron net-create are deprecated. There are a few special cases that are only possible with the individual component clients, but most functionality is covered by the openstack client. Unfortunately, documentation often still shows the old commands because many documents are not up to date.
To avoid the error you got, you only need to remove --provider:physical_network=physnet_em1 and --provider:segmentation_id=500 from your command. The physical network and VLAN range should be defined in the ml2_conf.ini of the neutron server, for example like this:
[ml2]
type_drivers = flat,vlan,vxlan
...
[ml2_type_vlan]
network_vlan_ranges = physnet_em1:171:280
...
So neutron net-create mgmt --provider:network_type=vlan --shared works in my test deployment (at least there is no error in the terminal; I have not tested the network connectivity). The equivalent openstack command would be openstack network create --provider-network-type vlan mgmt --share --external.
Normally, as far as I know, a flat network type is used for the provider network instead of vlan, because the provider network is usually not connected directly to any VM. The other, non-provider networks can be vlan or vxlan and are then connected to the provider network through a neutron router. An openstack command for this could be: openstack network create --provider-network-type flat --provider-physical-network physnet_em1 mgmt --share --external. For flat networks you can define the provider physical network on the command line.
Some documentation, such as https://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-networks-provider.html, also uses a flat network as the provider network type.
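For completeness, a minimal sketch of attaching a tenant network to such a provider network through a router (the names internal, internal-subnet and router1, and the subnet range, are just examples):
openstack network create internal
openstack subnet create --network internal --subnet-range 192.168.10.0/24 internal-subnet
openstack router create router1
openstack router set --external-gateway mgmt router1
openstack router add subnet router1 internal-subnet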

Openstack All-In-One Single Machine Networking

I'm having a hard time configuring an Openstack environment based on the All-In-One Single Machine installer for bridged networking in my LAN.
My objective is to SSH into the instances created in Openstack from my LAN.
The server is an Ubuntu 16.04 LTS with minimal installation and OpenSSH. The network configuration of the server is:
auto enp3s0
iface enp3s0 inet static
address 10.4.4.1
netmask 255.255.255.0
gateway 10.4.4.254
broadcast 10.4.4.255
network 10.4.4.0
dns-nameservers 10.4.1.12 10.4.1.10
Basically my network details are the following:
LAN 10.4.4.0
MASK 255.255.255.0
Gateway/DHCP Server 10.4.4.254
The local.conf file I used for deploying DevStack is the following:
# Sample ``local.conf`` for user-configurable variables in ``stack.sh``
# NOTE: Copy this file to the root DevStack directory for it to work properly.
# ``local.conf`` is a user-maintained settings file that is sourced from ``stackrc``.
# This gives it the ability to override any variables set in ``stackrc``.
# Also, most of the settings in ``stack.sh`` are written to only be set if no
# value has already been set; this lets ``local.conf`` effectively override the
# default values.
# This is a collection of some of the settings we have found to be useful
# in our DevStack development environments. Additional settings are described
# in https://docs.openstack.org/devstack/latest/configuration.html#local-conf
# These should be considered as samples and are unsupported DevStack code.
# The ``localrc`` section replaces the old ``localrc`` configuration file.
# Note that if ``localrc`` is present it will be used in favor of this section.
[[local|localrc]]
# Minimal Contents
# ----------------
# While ``stack.sh`` is happy to run without ``localrc``, devlife is better when
# there are a few minimal variables set:
# If the ``*_PASSWORD`` variables are not set here you will be prompted to enter
# values for them by ``stack.sh``and they will be added to ``local.conf``.
FLOATING_RANGE=10.4.4.192/27
FIXED_RANGE=192.168.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=enp3s0
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
# ``HOST_IP`` and ``HOST_IPV6`` should be set manually for best results if
# the NIC configuration of the host is unusual, i.e. ``eth1`` has the default
# route but ``eth0`` is the public interface. They are auto-detected in
# ``stack.sh`` but often is indeterminate on later runs due to the IP moving
# from an Ethernet interface to a bridge on the host. Setting it here also
# makes it available for ``openrc`` to include when setting ``OS_AUTH_URL``.
# Neither is set by default.
HOST_IP=10.4.4.1
#HOST_IPV6=2001:db8::7
# Logging
# -------
# By default ``stack.sh`` output only goes to the terminal where it runs. It can
# be configured to additionally log to a file by setting ``LOGFILE`` to the full
# path of the destination log file. A timestamp will be appended to the given name.
LOGFILE=$DEST/logs/stack.sh.log
# Old log files are automatically removed after 7 days to keep things neat. Change
# the number of days by setting ``LOGDAYS``.
LOGDAYS=2
# Nova logs will be colorized if ``SYSLOG`` is not set; turn this off by setting
# ``LOG_COLOR`` false.
#LOG_COLOR=False
# Using milestone-proposed branches
# ---------------------------------
# Uncomment these to grab the milestone-proposed branches from the
# repos:
#CINDER_BRANCH=milestone-proposed
#GLANCE_BRANCH=milestone-proposed
#HORIZON_BRANCH=milestone-proposed
#KEYSTONE_BRANCH=milestone-proposed
#KEYSTONECLIENT_BRANCH=milestone-proposed
#NOVA_BRANCH=milestone-proposed
#NOVACLIENT_BRANCH=milestone-proposed
#NEUTRON_BRANCH=milestone-proposed
#SWIFT_BRANCH=milestone-proposed
# Using git versions of clients
# -----------------------------
# By default clients are installed from pip. See LIBS_FROM_GIT in
# stackrc for details on getting clients from specific branches or
# revisions. e.g.
# LIBS_FROM_GIT="python-ironicclient"
# IRONICCLIENT_BRANCH=refs/changes/44/2.../1
# Swift
# -----
# Swift is now used as the back-end for the S3-like object store. Setting the
# hash value is required and you will be prompted for it if Swift is enabled
# so just set it to something already:
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
# For development purposes the default of 3 replicas is usually not required.
# Set this to 1 to save some resources:
SWIFT_REPLICAS=1
# The data for Swift is stored by default in (``$DEST/data/swift``),
# or (``$DATA_DIR/swift``) if ``DATA_DIR`` has been set, and can be
# moved by setting ``SWIFT_DATA_DIR``. The directory will be created
# if it does not exist.
SWIFT_DATA_DIR=$DEST/data
At the end of the deployment I am able to ping from the instance to my LAN and run nslookup on google.com, for example, but not the other way around: I cannot ping/ssh/telnet into the instance from my LAN.
The security group permits all traffic, all ICMP ingress/egress, SSH from everywhere.
I've tried telnetting from the OpenStack instance to my local computer, and the connection shows up as coming from the IP of the OpenStack host rather than from the instance. So I'm missing something in the network topology.
netstat -ant | grep 1716
tcp6 0 0 :::1716 :::* LISTEN
tcp6 0 0 10.4.3.34:1716 10.4.4.1:42992 ESTABLISHED
Is there any type of network deployment I'm missing?
Any advice would be much appreciated!
If you are trying to access your instances from the "outside", you will need to create a floating IP pool and assign a floating IP to one of your instances.
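A minimal sketch of that workflow with the openstack client (assuming DevStack's default external network is named public, the instance is called vm1, and 10.4.4.195 is a free address from your FLOATING_RANGE):
openstack floating ip create public
openstack server add floating ip vm1 10.4.4.195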

multiple neutron nodes with only one node attached to external network

I have 3 network nodes running neutron-server.
Only one of these nodes is attached to the external network.
I use ML2 with Open vSwitch.
In the bridge mapping of the node connected to the external network (via floating IPs), I have external_net mapped to the correct bridge.
On the other nodes this mapping is not defined and they have no interface on that network.
The issue I have is the following:
When I try to start a virtual machine connected to the external network, I see this error in the logs:
neutron-server: 2016-09-07 12:33:00.975 57352 ERROR neutron.plugins.ml2.managers [req-def18170-5e45-4fef-9653-e008faa39913 - - - - -] Failed to bind port 035a58e1-f18f-428b-b78e-e8c0aaba7d14 on host node002 for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'external_net', 'id': u'0d4590e5-0c48-4316-8b78-1636d3f44d43', 'network_type': u'flat'}]
neutron-server: 2016-09-07 12:33:00.975 57352 ERROR neutron.plugins.ml2.managers [req-def18170-5e45-4fef-9653-e008faa39913 - - - - -] Failed to bind port 035a58e1-f18f-428b-b78e-e8c0aaba7d14 on host node003 for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'external_net', 'id': u'0d4590e5-0c48-4316-8b78-1636d3f44d43', 'network_type': u'flat'}]
This happens on both nodes (node002 and node003) because they do NOT have this network defined. So is this a bug, or is such a setup simply not valid?
Thank you
In a typical OpenStack deployment you do not bind Nova instances directly to the external network. As you have already surmised, this won't work because that network isn't provisioned on the compute hosts.
Instead, you attach your instances to an internal network and then assign floating IP addresses from the external network using, e.g., nova floating-ip-create and nova floating-ip-associate.
An alternative solution is to use "provider external networks", an arrangement in which your nova instances are attached directly to L2 networks with external connectivity, rather than relying on the floating-ip NAT solution described in the previous paragraphs.
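For example, a short sketch of the floating-IP approach with the legacy nova commands mentioned above (the pool name, instance name and address are placeholders):
nova floating-ip-create <external-network-name>
nova floating-ip-associate <instance-name> <allocated-floating-ip>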
The reason behind the error was a bad configuration on the nodes that do NOT host the provider network, mainly in the ML2 config file ml2_conf.ini.
The flat_networks parameter should be set to the appropriate value on each node.
On the node that is connected to all flat networks (including the internal one) it should be set to:
flat_networks = *
And on the nodes that do not host all flat networks (the provider network, for instance):
flat_networks = physical_internal
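A minimal sketch of where that lives in ml2_conf.ini, using the physical network names from this setup (yours may differ):
On the node connected to the external network:
[ml2_type_flat]
flat_networks = *
On the other network nodes:
[ml2_type_flat]
flat_networks = physical_internal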
I believe it won't work. You need to have bound ports on all 3 of your network nodes.
A quick test would be to stop the neutron-server, neutron-dhcp-agent, neutron-l3-agent and neutron-metadata-agent services on the 2 network nodes that are not bound to external ports... and test again.

bdf based pci-passthrough (non SRIOV) using OpenStack Liberty

I am trying to get non-SR-IOV PCI passthrough working with OpenStack Liberty, but have not been successful.
These are the steps I followed:
1. Create pci_passthrough_whitelist in nova.conf on the compute node: pci_passthrough_whitelist = {"address": "0000:89:00.0", "physical_network": "test_phy_nw"}
2. Since SR-IOV is not used, do not add sriovnicswitch as a mechanism driver in ml2 and do not do any ML2 SR-IOV configuration. Do not configure pci_passthrough_alias, as the alias does not support a BDF (address).
3. Create a neutron net: neutron net-create --name test_os_nw --provider:physical_network test_phy_nw --provider:physical_network_type flat (is flat OK, or should I use vlan or vxlan type networks?)
4. Create a port with direct vnic_type: neutron port-create --name pci.port --binding:vnic_type direct
5. Boot an instance with this port: nova boot --flavor m1.small --image ubuntu --nic port-id=$(neutron port-show pci.port -F id -f value) test.vm
Two questions in this regard:
1. Are the steps mentioned above correct, and am I missing anything?
2. Is the process for PCI passthrough (non-SR-IOV) different from SR-IOV PCI passthrough? If it is different, can you please share a link to it (or, even better, give a quick summary of the process)?
After some more experimenting and reading, I figured out that BDF-based passthrough is supported only for SR-IOV (as of Liberty).
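For reference, a rough sketch of the alias-based approach that does work for plain (non-SR-IOV) passthrough; the vendor/product IDs and alias name are placeholders you would replace with your device's values from lspci -nn:
In nova.conf on the compute node (the alias is typically also configured on the controller running nova-api):
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}
pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name": "my_pci_dev"}
Then request the device through a flavor instead of a Neutron port:
nova flavor-key m1.small set "pci_passthrough:alias"="my_pci_dev:1"
nova boot --flavor m1.small --image ubuntu test.vm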
