OpenStack OpenDaylight L3 issue

I am having an issue with ODL + OpenStack + OVS L3 routing. I had a previous installation of OVS + OpenStack operational with one control node and one compute/network node.
After installing OpenDaylight and following the instructions on the OpenDaylight site for NetVirt integration, I have L2 working over VXLAN, but any router that I deploy has all of its interfaces down.
I cleared the OVS config and allowed ODL to create the br-int bridge, and then I added a mapping for :. Is there anything else I need to do? As I understand it, you do not need to create br-ex if ODL is being used.
Within neutron.conf I have specified odl-router.
Let me know if you want to see any configurations; I am at a bit of a loss here.
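For reference, this is roughly what I have configured; physnet1 and eth1 below are placeholders for my actual names, and the exact keys may differ between ODL/NetVirt releases, so treat it as a sketch:
# provider mapping on each OVS node (physnet1:eth1 is a placeholder pair)
sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=physnet1:eth1
# neutron.conf on the control node
[DEFAULT]
service_plugins = odl-router
# ml2_conf.ini
[ml2]
mechanism_drivers = opendaylight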
Cheers

Related

How to enable and leverage OVS-DPDK in OpenStack

I have successfully deployed OpenStack Packstack (all-in-one) in a single VM running CentOS 7. Everything works well and I am able to instantiate VMs over native OVS networking.
I plan to enable OVS-DPDK in my OpenStack. I have gone through some documentation but it is still not clear to me how to enable it. I understand that the OVS shipped with OpenStack Queens already supports DPDK. I have seen people asking the same question with no answer, so I would like to ask again how I can enable DPDK support on my running OpenStack. Some methods involve changing the Neutron configuration file or deploying an SDN controller. I hope there is no need to redeploy; any further advice would be much appreciated.
This is my current OVS version:
ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.11.0
DPDK 18.11.0
I really appreciate your help and support with my question.
Please assist. Thank you
OpenStack and OVS are two different processes which communicate via OpenFlow rules. That is, the Neutron plugin used in OpenStack needs to be configured for the ports in use, and OVS needs to be pointed at the IP of the OpenStack Neutron controller. Hence the changes are:
use Neutron or a similar plugin for network configuration.
update the Neutron plugin config file with the desired ports for OVS.
start ovs-vswitchd with the IP address of the controller.
Note: the OVS binary needs to be built with the DPDK libraries to have DPDK support. Do not expect that merely installing DPDK on the distro will make the OVS binary DPDK-capable.
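As a rough sketch of what the DPDK-related settings look like with a DPDK-enabled OVS build (the socket memory, CPU mask, and socket directory below are example values that depend on your NUMA layout and distro):
# tell OVS to initialise DPDK (values are examples only)
sudo ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
sudo ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
sudo systemctl restart openvswitch
# neutron openvswitch agent config on the compute node
[ovs]
datapath_type = netdev
vhostuser_socket_dir = /var/run/openvswitch
With datapath_type set to netdev, Neutron creates the bridges on the userspace datapath and Nova plugs VMs in via vhost-user ports.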

Why would someone add an OpenDaylight controller in an OpenStack cloud?

I have seen a lot of people saying that OpenDaylight is the best SDN controller for OpenStack. While I understand the benefits of a software defined network, I can't see how ODL is better than Neutron in an OpenStack cloud.
In both cases, the Open vSwitch instances are automatically configured (either by Neutron or by ODL), and the end user just uses the regular OpenStack web interface or the command line to create networks and VMs.
So, what is the reason people add an SDN controller, especially OpenDaylight, to a cloud like OpenStack?
Thanks in advance.
First, let me clarify that Neutron is only an API layer and always needs a backend (configured as a Neutron plug-in) service in order to implement the actual networking for the OpenStack cloud. In many cases, the Open vSwitch (OVS) plug-in is configured out of the box, and people conflate Neutron with the actual OVS-based Neutron implementation.
To answer your question: OpenStack and Neutron are all about choice. If the OVS solution for Neutron is good enough for you, then great -- you don't need an "SDN" nor OpenDaylight in the mix. But some find this solution not good enough for them, typically because of missing functionality like controlling both the virtual and physical network from one place, bridging between Neutron overlay networks (typically VXLAN VNIs) and existing networks in the data center (VLAN, IP/MPLS, etc.), connecting OpenStack projects with other (non-OpenStack) infrastructure (e.g. VMware, public cloud), and so on. This is where the OVS solution is replaced with another "SDN".
Around Icehouse time-frame, the concept of Modular Layer 2 (ML2) was introduced. Many SDN solutions plug-in into Neutron via this ML2 interface and a mechanism driver.
It should be noted that ML2 is focused on L2 Neutron resources. In order to implement L3 resources (like routing, NAT) there is a need for an L3 service plugin. Similarly, there are separate driver interfaces for L4-L7 resources such as LBaaS, VPNaaS, FWaaS, BGP/VPN and so on. So depending on the SDN solution and its capabilities you may see a combination of mechanism driver, an L3 service plug-in, and L4-L7 drivers. As an example, the OpenDaylight plug-in for Neutron (aka networking-odl) includes an ML2 mechanism driver, but also a bunch of other drivers for L3-L7 services, see https://github.com/openstack/networking-odl.
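To make this concrete, a typical networking-odl setup on the Neutron server looks roughly like the following; the driver and plug-in names vary by release (e.g. opendaylight vs. opendaylight_v2), and the ODL address and credentials are placeholders:
# neutron.conf
[DEFAULT]
service_plugins = odl-router_v2
# ml2_conf.ini
[ml2]
mechanism_drivers = opendaylight_v2
[ml2_odl]
url = http://<odl-ip>:8080/controller/nb/v2/neutron
username = admin
password = admin
With this in place, Neutron API calls are forwarded to OpenDaylight, which then programs the OVS instances on the nodes.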

Use more than one physical interface on DevStack Ocata

I have a barebones PC with 6 physical interfaces to 'play' around with devstack. I'd like to deploy VMs on it and connect them to different Ethernet interfaces. Currently, I'm only able to use the interface associated to the br-ex bridge. I've tried to define several OVS bridges and assign the physical interfaces to different bridges. I try to define a mapping with more than one bridge, but that doesn't seem to work.
Has anyone had any success with this?
Thanks, /PA
You will need to create "provider networks", which allow you to associate neutron networks with specific physical interfaces. You can find documentation on creating provider networks in the install guide.
Once everything is configured correctly, you can attach new Nova servers to your provider networks by passing the appropriate --network argument when you boot a new server.
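As a sketch, assuming two spare NICs (eth1/eth2) with placeholder physnet, bridge, image, and flavor names, the idea is one OVS bridge and one physical network per interface:
# openvswitch_agent.ini (or the equivalent DevStack local.conf settings)
[ovs]
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2
# ml2_conf.ini
[ml2_type_flat]
flat_networks = physnet1,physnet2
# create the bridges and attach the NICs
sudo ovs-vsctl add-br br-eth1
sudo ovs-vsctl add-port br-eth1 eth1
sudo ovs-vsctl add-br br-eth2
sudo ovs-vsctl add-port br-eth2 eth2
# create a provider network on one of them and boot a server attached to it
openstack network create --provider-network-type flat --provider-physical-network physnet1 provider-eth1
openstack subnet create --network provider-eth1 --subnet-range 192.0.2.0/24 provider-eth1-subnet
openstack server create --image cirros --flavor m1.tiny --network provider-eth1 test-vm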

ODL and OpenStack issue

I am running one controller and one compute node. The controller node is running both ODL and OpenStack. I created tenants, under them I created networks, and I launched instances on those networks. All I see on the ODL web GUI is 3 switches, which I guess are the br-int and br-ex of the controller and the br-int of the compute node, and the links are missing too. Is there any way I can see my whole OpenStack topology on the ODL GUI, with the links?
Please help me.
If I understand you right, you want to see all the OVS bridges used in OpenStack and all the VMs attached to them. You can definitely see that. Just ping all the VMs from one VM to make sure the controller has added flows on OVS with the VMs' MAC addresses, and then you will see the topology on the GUI.
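For example (br-int is the integration bridge; the MAC and IP below are placeholders):
# from inside one VM
ping -c 3 <other-vm-ip>
# on the compute node, check that the controller pushed flows matching the VM MACs
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | grep -i <vm-mac>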

Checking a small network in Mininet with OpenDaylight

I have a question about checking a small network with OpenDaylight.
I am not really sure why I can't access the OpenDaylight menu for the network I created from Mininet.
I am using Windows 7 and VMware Player to run Mininet and OpenDaylight (on Ubuntu).
First, I run Ubuntu to start OpenDaylight (I checked that 127.0.0.1:8080 was working).
Second, I run Mininet to get its IP address (I will say it is 192.168.139.128).
Third, based on that IP address, I run two PuTTY sessions, one to run Wireshark and one to build the small network.
I used sudo mn --mac --controller=remote, ip=192.168.139.128, port=6633.
It successfully builds the small network, because I can check all the node info with the commands "nodes" and "dump".
However, when I go back to the Ubuntu VM and access OpenDaylight at 192.168.139.128:8080 (the IP given by Mininet), I can't see the network I created.
I am not really sure why this happens. Are there any possible reasons?
Just in case anyone is facing the same issue: change the network adapter settings in VMware to use bridged mode.
From the official page:
Important troubleshooting - if you are running VirtualBox on the same host/desktop where the controller is running, and trying to start the virtual network on the Mininet VM produces this error: "Unable to contact the remote controller at ...", then the following resolves the problem:
1. In VirtualBox, go to File > Preferences > Network and make sure you have at least one interface defined as Host-Only. Let's say its name is vboxnet0.
2. In VirtualBox > Mininet VM > Settings > Network, check that the adapter is of type Host-only, and is connected to the interface from item 1 (vboxnet0).
3. On your host where the controller and VirtualBox run, run the "ifconfig" command to display all network interfaces on the machine. Search for the interface from item 1 (vboxnet0 in our example). Take the IP address specified there (most probably 192.168.56.1, the default); that is the correct remote controller IP address to use when starting a virtual network in the Mininet VM, as stated in the example above (--controller=remote,ip=192.168.56.1).
4. If you are still not able to connect, you might want to consider temporarily disabling the firewall on the host running the controller (on Linux, for example, iptables -F will do the job).
Sometimes, the way you start Mininet is the problem: it does not give an error, but it does not connect to the remote controller. Here is a wrong example:
sudo mn --topo=tree,3 --mac --switch=ovsk --controller=remote, ip=192.168.16.10
Here is the correct example:
sudo mn --topo=tree,3 --mac --switch=ovsk --controller=remote,ip=192.168.16.10
The difference is the SPACE between "remote," and "ip".
Also, if you are using VMware Player, use this command with the Mininet VM's IP:
sudo mn --mac --controller=remote,ip=192.168.139.128 --topo tree,5
and refresh your OpenDaylight controller.
The easiest way is to install Gnome on your Mininet/ODL virtual machine.
I am using the latest (Helium) ODL release, so the GUI of ODL is at http://localhost:8181/dlux/index.html
On Helium, ODL runs from inside your distribution folder with the ./bin/karaf command (you also need to install the required modules inside Karaf with feature:install).
Attached is my screenshot: https://pbs.twimg.com/media/B8ZgSA6CMAAzuSf.jpg:large
Start OpenDaylight and install the odl-dlux-core feature. After that, the OpenDaylight UI can be accessed through a browser on port 8181. So try http://localhost:8181/index.html and log in with the username/password admin/admin. You should see your topology show up on the UI.
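In other words, something along these lines; the exact feature set depends on your ODL release, and odl-l2switch-switch is just one common choice for getting a Mininet topology to show up:
# from the ODL distribution directory
./bin/karaf
# at the karaf prompt
feature:install odl-dlux-core
feature:install odl-l2switch-switch
# then browse to http://localhost:8181/index.html and log in as admin/admin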
