I have a barebones PC with 6 physical interfaces to play around with DevStack. I'd like to deploy VMs on it and connect them to different Ethernet interfaces. Currently, I'm only able to use the interface associated with the br-ex bridge. I've tried defining several OVS bridges and assigning the physical interfaces to different bridges, and then defining a bridge mapping with more than one bridge, but that doesn't seem to work.
Has anyone had any success with this?
Thanks, /PA
You will need to create "provider networks", which allow you to associate neutron networks with specific physical interfaces. You can find documentation on creating provider networks in the install guide.
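To make this concrete, here is a rough sketch of the usual pattern with the OVS agent (the bridge names, physnet labels, file paths, and addresses below are assumptions for illustration, not your exact setup): create one OVS bridge per physical NIC, map each bridge to a physnet label in the agent's bridge_mappings, and then create one provider network per physnet.

    # Create an OVS bridge per physical NIC and attach the NIC to it
    sudo ovs-vsctl add-br br-eth1
    sudo ovs-vsctl add-port br-eth1 eth1
    sudo ovs-vsctl add-br br-eth2
    sudo ovs-vsctl add-port br-eth2 eth2

    # In the OVS agent config (e.g. /etc/neutron/plugins/ml2/openvswitch_agent.ini):
    #   [ovs]
    #   bridge_mappings = physnet1:br-eth1,physnet2:br-eth2
    # (the physnet labels must also be allowed by flat_networks / network_vlan_ranges
    #  in your ML2 configuration)

    # Create a flat provider network and a subnet on one of the physnets
    openstack network create --provider-network-type flat \
        --provider-physical-network physnet1 provider-net1
    openstack subnet create --network provider-net1 \
        --subnet-range 192.168.10.0/24 provider-subnet1

In DevStack you would normally express the bridge mappings through your local.conf rather than editing the agent config by hand, but the end result is the same mapping.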
Once everything is configured correctly, you can attach new Nova servers to your provider networks by passing the appropriate --network argument when you boot a new server.
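For example (image, flavor, and network names are placeholders):

    openstack server create --image cirros --flavor m1.tiny \
        --network provider-net1 test-vm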
I have seen a lot of people saying that OpenDaylight is the best SDN controller for OpenStack. While I understand the benefits of a software defined network, I can't see how ODL is better than Neutron in an OpenStack cloud.
In both cases, the Open vSwitch instances are configured automatically (either by Neutron or by ODL), and the end user just uses the regular OpenStack web interface or the command line to create networks and VMs.
So what is the reason people add an SDN controller, especially OpenDaylight, to a cloud like OpenStack?
Thanks in advance.
First, let me clarify that Neutron is only an API layer and always needs a backend service (configured as a Neutron plug-in) to implement the actual networking for the OpenStack cloud. In many cases the Open vSwitch (OVS) plug-in is configured out of the box, and people conflate Neutron with that OVS-based Neutron implementation.
To answer your question: OpenStack and Neutron are all about choice. If the OVS solution for Neutron is good enough for you, then great -- you don't need an "SDN" or OpenDaylight in the mix. But some find this solution insufficient, typically because of missing functionality such as controlling both the virtual and the physical network from one place, bridging between Neutron overlay networks (typically VXLAN VNIs) and existing networks in the data center (VLAN, IP/MPLS, etc.), or connecting OpenStack projects with other (non-OpenStack) infrastructure (e.g. VMware, public cloud). This is where the OVS solution gets replaced with another "SDN".
Around the Icehouse time frame, the concept of Modular Layer 2 (ML2) was introduced. Many SDN solutions plug into Neutron via this ML2 interface and a mechanism driver.
It should be noted that ML2 is focused on L2 Neutron resources. To implement L3 resources (like routing and NAT) there needs to be an L3 service plug-in. Similarly, there are separate driver interfaces for L4-L7 resources such as LBaaS, VPNaaS, FWaaS, BGP/VPN, and so on. So depending on the SDN solution and its capabilities, you may see a combination of an ML2 mechanism driver, an L3 service plug-in, and L4-L7 drivers. As an example, the OpenDaylight plug-in for Neutron (aka networking-odl) includes an ML2 mechanism driver as well as a number of drivers for L3-L7 services; see https://github.com/openstack/networking-odl.
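As a concrete (hedged) sketch of what this looks like with networking-odl, the Neutron configuration typically selects the ODL mechanism driver and L3 plug-in and points them at the ODL northbound API. The exact option names and the URL/port vary by release, so treat the values below as illustrative:

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    mechanism_drivers = opendaylight_v2
    tenant_network_types = vxlan

    [ml2_odl]
    url = http://ODL_IP:8080/controller/nb/v2/neutron
    username = admin
    password = admin

    # /etc/neutron/neutron.conf
    [DEFAULT]
    service_plugins = odl-router_v2

With this in place, Neutron still exposes the same API to users; ODL simply becomes the backend that programs the switches.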
When using Docker swarm mode and exposing ports externally, a container is attached to at least three networks: the ingress network, the bridge network, and the overlay network (used for internal cluster communication). The container joins these networks through interfaces eth0-eth2 (the assignment varies each time), and from an application point of view it is not easy to tell which of them is the cluster network (the correct one to publish for service discovery clients, e.g. Spring Eureka).
Is there a way to customize the network interface names?
Not a direct answer to your question, but one of the key selling points of swarm mode is the built-in service discovery mechanism, which in my opinion works really nicely.
More to the point, I don't think it's possible to specify the desired interface for an overlay network. However, when creating a network you can define its subnet or IP range (https://docs.docker.com/engine/reference/commandline/network_create/). You could use that to identify the interface belonging to your overlay network by checking whether the bound IP address is part of the network you want to publish on.
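As a hedged sketch of that approach (the subnet, network name, service name, and image are arbitrary examples):

    # Create the overlay network with a known subnet
    docker network create --driver overlay --subnet 10.0.9.0/24 cluster-net

    # Attach the service to it
    docker service create --name my-app --network cluster-net my-app-image

    # Inside the container, find the interface whose address falls in that subnet
    ip -o -4 addr show | grep ' 10\.0\.9\.'

Your service discovery client can then do the equivalent lookup programmatically: enumerate the local interfaces and register the address that falls inside the known overlay subnet.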
Do I need controller and Neutron nodes to send requests to a bare metal machine in OpenStack? Can requests be sent directly to a bare metal machine without passing through the controller/Neutron nodes?
Provider networking allows you to attach Nova instances directly to existing layer 2 networks so that they do not need to transit the Neutron controller for either local or external network access. You can mix-and-match provider networks with normal OpenStack virtual networks depending on your needs and available network resources.
This same solution would allow baremetal machines to communicate without involving the Neutron host.
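For example, a VLAN provider network mapped onto an existing data-center VLAN (the physnet name and segment ID here are assumptions) would be created like this, and both VMs and bare metal nodes attached to it can then talk over that VLAN directly:

    openstack network create --provider-network-type vlan \
        --provider-physical-network physnet1 \
        --provider-segment 120 dc-vlan-120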
I am running one controller node and one compute node. The controller node is running both ODL and OpenStack. I created tenants, created networks under them, and launched instances on those networks. All I see in the ODL web GUI is three switches, which I guess are br-int and br-ex on the controller and br-int on the compute node, and the links between them are missing too. Is there any way I can see my whole OpenStack topology, with links, in the ODL GUI?
Please help me
If I understand you correctly, you want to see all the OVS bridges used in OpenStack and all the VMs attached to them. You can definitely see that. Just ping all the VMs from one VM to make sure the controller has added flows with the VMs' MAC addresses to OVS, and then you will see the topology in the GUI.
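If the topology still doesn't show up, you can check on each node whether flows matching the VM MAC addresses were actually installed (br-int is the usual integration bridge; the IP below is a placeholder, and you may need -O OpenFlow13 depending on your OVS/ODL setup):

    # Generate some traffic first, e.g. from one VM:
    ping -c 3 <other-vm-ip>

    # Then, on the compute or controller node:
    sudo ovs-ofctl dump-flows br-int | grep -i fa:16:3e

fa:16:3e is the default MAC prefix OpenStack uses for VM ports, so matching flows indicate the controller has learned your instances.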
We're currently looking to migrate an old and buggy Eucalyptus cloud to OpenStack. We have ~15 machines that are all on the same office-internal network. The instances get their network configuration from an external (non-Eucalyptus) DHCP server. We run both Linux and Windows images. The cloud is used exclusively for platform testing from Jenkins.
Looking into OpenStack, it seems that none of the three supported networking modes really fit our environment. What we are looking for is something like an "unmanaged mode" where OpenStack launches an instance that is hooked up to the eth0 interface of the instance's compute node and receives its network configuration from the external DHCP server on boot. That is, the VMs, guest hosts, and clients (Jenkins) are all on the same network, managed by an external DHCP server.
Is a scenario like this possible to set up in OpenStack?
It's not commonly used, but the networking setup that will best fit your needs is FlatNetworking (not FlatDHCPNetworking). There isn't stellar documentation on configuring that setup for your environment, and some pieces (like the nova-metadata service) may be a bit tricky to manage with it, but it should allow you to run an OpenStack cloud with an external DHCP provider.
I wrote up the wiki page http://wiki.openstack.org/UnderstandingFlatNetworking some time ago to explain the setup of the various networks and how they operate with regard to NICs on the hosting systems. FlatNetworking is effectively the same as FlatDHCPNetworking, except that OpenStack doesn't try to run the DHCP service for you.
Note that with this mode, all the VM instances will be on the same network as your OpenStack infrastructure - there's no separation of networks at all.
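As a rough sketch from the nova-network era (the exact option set depends on your release, so treat these values as illustrative), FlatNetworking with an external DHCP server boiled down to something like this in nova.conf:

    # /etc/nova/nova.conf (legacy nova-network, flat mode)
    network_manager = nova.network.manager.FlatManager
    flat_interface = eth0
    flat_network_bridge = br100
    # Don't inject addresses into guests; let the external DHCP server hand them out
    flat_injected = False

On current Neutron-based clouds the closest equivalent is roughly a flat provider network whose subnet is created with --no-dhcp, which likewise leaves addressing to your existing DHCP server.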