Why would someone add an OpenDaylight controller to an OpenStack cloud?

I have seen a lot of people saying that OpenDaylight is the best SDN controller for OpenStack. While I understand the benefits of a software defined network, I can't see how ODL is better than Neutron in an OpenStack cloud.
In both cases, Open vSwitch is configured automatically (either by Neutron or by ODL), and the end user just uses the regular OpenStack web interface or the command line to create networks and VMs.
So what is the reason people add an SDN controller, especially OpenDaylight, to a cloud like OpenStack?
Thanks in advance.

First, let me clarify that Neutron is only an API layer and always needs a backend service (configured as a Neutron plug-in) in order to implement the actual networking for the OpenStack cloud. In many cases, the Open vSwitch (OVS) plug-in is configured out of the box, and people conflate Neutron with the actual OVS-based Neutron implementation.
To answer your question: OpenStack and Neutron are all about choice. If the OVS solution for Neutron is good enough for you, then great -- you don't need an "SDN" or OpenDaylight in the mix. But some find this solution insufficient, typically because of missing functionality: controlling both the virtual and physical network from one place, bridging between Neutron overlay networks (typically VXLAN VNIs) and existing networks in the data center (VLAN, IP/MPLS, etc.), connecting OpenStack projects with other (non-OpenStack) infrastructure (e.g. VMware, public cloud), and so on. This is where the OVS solution gets replaced with another "SDN".
Around the Icehouse time-frame, the concept of Modular Layer 2 (ML2) was introduced. Many SDN solutions plug into Neutron via this ML2 interface and a mechanism driver.
It should be noted that ML2 is focused on L2 Neutron resources. In order to implement L3 resources (like routing and NAT) there is a need for an L3 service plug-in. Similarly, there are separate driver interfaces for L4-L7 resources such as LBaaS, VPNaaS, FWaaS, BGP/VPN and so on. So depending on the SDN solution and its capabilities, you may see a combination of a mechanism driver, an L3 service plug-in, and L4-L7 drivers. As an example, the OpenDaylight plug-in for Neutron (aka networking-odl) includes an ML2 mechanism driver, but also a number of other drivers for L3-L7 services; see https://github.com/openstack/networking-odl.
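As a rough sketch of what swapping in such a backend looks like, the ML2 configuration typically replaces the OVS mechanism driver with the OpenDaylight one and points Neutron at the controller's northbound API. The hostname, credentials, and exact option names below are illustrative and vary by release:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini  (illustrative fragment)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
# the OpenDaylight mechanism driver replaces the default openvswitch one
mechanism_drivers = opendaylight_v2

[ml2_odl]
# URL and credentials of the ODL controller's Neutron northbound API
url = http://odl-controller:8080/controller/nb/v2/neutron
username = admin
password = admin
```

For the L3 side, the corresponding service plug-in (e.g. `odl-router_v2` in `service_plugins` in neutron.conf) would be enabled instead of Neutron's built-in L3 agent.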

Related

How to enable and leverage OVS-DPDK in OpenStack

I have successfully deployed OpenStack Packstack (all-in-one) in a single VM running CentOS 7. Everything works well and I am able to instantiate VMs over native OVS networking.
I plan to enable OVS-DPDK in my OpenStack deployment. I have gone through some documentation but it is still not clear to me how to enable it. I understand that in OpenStack Queens, OVS already supports DPDK. I have seen people ask the same question without getting an answer, so I would like to ask again how I can enable DPDK support in my running OpenStack. Is it a matter of changing the Neutron configuration files, or of deploying an SDN controller? I hope there is no need to redeploy; any further advice would be much appreciated.
This is my current OVS version:
ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.11.0
DPDK 18.11.0
I really appreciate your help and support with my question. Please assist. Thank you.
OpenStack and OVS are two different processes that communicate via OpenFlow rules. That is, the Neutron plugin used in OpenStack needs to be configured for the ports in use, and OVS needs to be started with the IP address of the OpenStack Neutron controller. Hence the changes are:
1. Use Neutron (or a similar plugin) for network configuration.
2. Update the Neutron plugin config file with the desired DPDK ports for OVS.
3. Start ovs-vswitchd with the IP address of the controller.
Note: the OVS binary needs to be built with DPDK libraries in order to have DPDK support. Do not expect that merely installing DPDK on the distro will make the OVS binary DPDK-capable.
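A minimal sketch of the kind of changes involved, assuming an OVS binary built with DPDK and hugepages already configured on the host (the PCI address, bridge and port names are placeholders, and exact options vary by release):

```shell
# Tell OVS to initialize DPDK (only works if OVS was built with DPDK)
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# Create a userspace (netdev) bridge and attach a DPDK-bound NIC to it
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
ovs-vsctl add-port br-dpdk dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0
```

On the Neutron side, the OVS agent would then be told to use the userspace datapath, e.g. `datapath_type = netdev` under `[ovs]` in openvswitch_agent.ini, so that VM ports are created as vhost-user ports rather than kernel ones.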

Use more than one physical interface on devstack ocata

I have a barebones PC with 6 physical interfaces to 'play' around with devstack. I'd like to deploy VMs on it and connect them to different Ethernet interfaces. Currently, I'm only able to use the interface associated to the br-ex bridge. I've tried to define several OVS bridges and assign the physical interfaces to different bridges. I try to define a mapping with more than one bridge, but that doesn't seem to work.
Has anyone had any success with this?
Thanks, /PA
You will need to create "provider networks", which allow you to associate Neutron networks with specific physical interfaces. You can find documentation on creating provider networks in the install guide.
Once everything is configured correctly, you can attach new Nova servers to your provider networks by passing the appropriate --network argument when you boot a new server.
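As a sketch of what this looks like with the OVS agent (all bridge, interface and network names below are examples, not required values): each extra NIC gets its own OVS bridge, the bridge is mapped to a physical network label in the agent config, and a provider network is created against that label.

```shell
# One-time host setup: bridge the second NIC (names are examples)
ovs-vsctl add-br br-eth2
ovs-vsctl add-port br-eth2 eth2
# In the agent/ML2 config:
#   [ovs]           bridge_mappings = physnet1:br-ex,physnet2:br-eth2
#   [ml2_type_flat] flat_networks   = physnet1,physnet2

# Create a provider network on the second NIC and boot a server onto it
openstack network create --provider-network-type flat \
    --provider-physical-network physnet2 net-eth2
openstack subnet create --network net-eth2 \
    --subnet-range 192.168.2.0/24 sub-eth2
openstack server create --image cirros --flavor m1.tiny \
    --network net-eth2 vm1
```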

How to integrate OpenDaylight with OpenStack? How does OpenDaylight operate when integrated with OpenStack?

I was trying to integrate ODL with Neutron by enabling the odl service in the local.conf file, but the ./stack.sh script did not complete properly.
How can I integrate the two?
And how does Neutron work with and without ODL?
Neutron provides "networking as a service" between interface devices managed by other OpenStack services, but it is essentially an API layer that translates requests into actual configuration of the network core; OpenDaylight is what really deals with the core of the network itself.
While the OpenDaylight controller provides several ways to integrate with OpenStack, you might use the VTN features available on the OpenDaylight controller. In this integration, VTN Manager acts as the network service provider for OpenStack. Its features enable OpenStack to work in a pure OpenFlow environment in which all switches in the data plane are OpenFlow switches.
So you can follow this tutorial: https://wiki.opendaylight.org/view/Release/Helium/VTN/User_Guide/OpenStack_Support#How_to_set_up_OpenStack_for_the_integration_with_VTN_Manager
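For the devstack case specifically, the usual route is the networking-odl devstack plugin rather than hand-editing services. A hedged sketch of a local.conf fragment (the URL, branch, and variable values here are illustrative and should be checked against the networking-odl docs for your release):

```ini
# local.conf fragment (illustrative; adjust to your release)
[[local|localrc]]
# Pull in the networking-odl devstack plugin
enable_plugin networking-odl https://opendev.org/openstack/networking-odl
# Let devstack download and run an ODL instance on the same node
ODL_MODE=allinone
# Use the ODL mechanism driver instead of the default OVS agent
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight_v2
disable_service q-agt
```

With this in place, ./stack.sh sets up Neutron so that ODL (rather than the OVS agent) programs the flows on each node's Open vSwitch.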

Openflow and nginx webserver

I am a newbie to OpenStack. I understand that Neutron can be used to deploy OpenFlow-compatible L2-L3 network devices (e.g. OVS) on the fly. Can this be extended to deploying, say, L7 devices, e.g. web servers like nginx? Googling doesn't yield any tangible answers. Inputs appreciated.
Neutron has been designed to cater to OpenStack's L2 and L3 needs alone. If you are looking to deploy other services, you should take a look at the Heat project.
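To make the distinction concrete: Heat orchestrates whole stacks (servers, networks, software) from a declarative template. A minimal illustrative HOT template that boots a VM and installs nginx via cloud-init might look like the following, where the image, flavor, and network names are placeholders for whatever exists in your cloud:

```yaml
# Illustrative Heat (HOT) template: a VM that installs nginx on boot.
# image/flavor/network values are placeholders for your environment.
heat_template_version: 2016-04-08

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-20.04
      flavor: m1.small
      networks:
        - network: private
      user_data_format: RAW
      user_data: |
        #cloud-config
        packages:
          - nginx
```

You would launch it with something like `openstack stack create -t nginx.yaml web-stack`; Neutron still only wires up the L2/L3 connectivity, while Heat and cloud-init handle the L7 software.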

Configuring OpenStack for an in-house test cloud

We're currently looking to migrate an old and buggy Eucalyptus cloud to OpenStack. We have ~15 machines that are all on the same office-internal network. The instances get their network configuration from an external (non-Eucalyptus) DHCP server. We run both Linux and Windows images. The cloud is used exclusively for platform testing from Jenkins.
Looking into OpenStack, it seems that out of the three supported networking modes, none really fits our environment. What we are looking for is something like an "unmanaged mode" where OpenStack launches an instance that is hooked up to the eth0 interface of the instance's compute node and receives its network configuration from the external DHCP server on boot. I.e. the VMs, guest hosts and clients (Jenkins) are all on the same network, managed by an external DHCP server.
Is a scenario like this possible to set up in OpenStack?
It's not commonly used, but the networking setup that best fits your needs is FlatNetworking (not FlatDHCPNetworking). There isn't stellar documentation on configuring that setup for your environment, and some pieces (like the nova-metadata service) may be a bit tricky to manage, but it should allow you to run an OpenStack cloud with an external DHCP provider.
I wrote up the wiki page http://wiki.openstack.org/UnderstandingFlatNetworking some time ago to explain the setup of the various networks and how they operate with regards to NICs on hosting systems. FlatNetworking is effectively the same as FlatDHCPNetworking except that OpenStack doesn't try and run the DHCP service for you.
Note that with this mode, all the VM instances will be on the same network with your OpenStack infrastructure - there's no separation of networks at all.
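For reference, the legacy nova-network flags involved looked roughly like the following nova.conf fragment. This is historical (pre-Neutron) configuration, and the interface and bridge names are examples only:

```ini
# nova.conf fragment for legacy FlatManager networking (historical sketch)
network_manager = nova.network.manager.FlatManager
# NIC on each compute node that is bridged onto the shared office network
flat_interface = eth0
flat_network_bridge = br100
# Do not inject static addressing into guests; let the external
# DHCP server on the shared segment hand out configuration instead.
flat_injected = false
```

With FlatManager, OpenStack runs no DHCP service of its own, which is exactly the "unmanaged" behavior asked about above.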
