OpenFlow and nginx webserver

I am a newbie to OpenStack. I understand that Neutron can be used to deploy OpenFlow-compatible L2-L3 network devices, e.g. OVS deployed on the fly. Can this be extended to deploying, say, L7 devices, e.g. web servers like nginx? Googling doesn't yield any tangible answers. Inputs appreciated.

Neutron has been designed to cater to OpenStack's L2 and L3 needs alone. If you are looking to deploy other services, you should take a look at the Heat project.
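To make that concrete, here is a minimal, hedged sketch of a Heat (HOT) template that boots a server and installs nginx via user_data; the image, flavor, and network names are assumptions to be replaced with values from your cloud:

cat > nginx-stack.yaml <<'EOF'
heat_template_version: 2016-10-14
resources:
  web:
    type: OS::Nova::Server
    properties:
      image: ubuntu-20.04          # assumed Glance image name
      flavor: m1.small             # assumed flavor
      networks:
        - network: private         # assumed Neutron network
      user_data: |
        #!/bin/bash
        apt-get update && apt-get install -y nginx
EOF
openstack stack create -t nginx-stack.yaml nginx-demo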

Related

kubernetes Load Balancing with Parallels RAS or NGINX

Maybe I am way off in my pursuit to create a Kubernetes setup on a local network that is as close to real as possible :-)
Is it possible to use Parallels (Desktop) RAS as a load balancer for Kubernetes?
1. I am running a master node in Ubuntu on Parallels Desktop
2. And some worker nodes, also in Parallels Desktop
Both use a bridged network.
It would be cool if it were possible to have a setup including a LoadBalancer.
You could use MetalLB, KubeVIP, or keepalived-operator (with HAProxy). I played around with KubeVIP but now use MetalLB in L2 mode in my Raspberry Pi based Kubernetes cluster. MetalLB in BGP mode would be even better if you have a router that supports the protocol (such as UniFi). The following references might help further, and a minimal configuration sketch follows the links:
https://www.openshift.com/blog/self-hosted-load-balancer-for-openshift-an-operator-based-approach
https://www.youtube.com/watch?v=9PLw1xalcYA
http://blog.cowger.us/2019/02/10/using-metallb-with-the-unifi-usg-for-in-home-kubernetes-loadbalancer-services.html
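As a hedged illustration of the MetalLB L2 option, here is roughly what the CRD-based configuration (MetalLB v0.13+) looks like; the pool name and the address range on the bridged Parallels network are assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: bridged-pool              # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # assumed free range on the bridged network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: bridged-l2                # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - bridged-pool
EOF

Services of type LoadBalancer should then get an address from that pool, announced via ARP on the bridge.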

Does K8s run on plain Layer2 network infrastructure?

Does K8s run on a plain Layer 2 network with no support for Layer 3 routing?
I'm asking because I want to switch my K8s environment from cloud VMs over to bare metal, and I'm not aware of the private-network infrastructure of the hoster ...
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network such as you would find on a private network (but it does rely on layer 4 networking).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel, or Weave, which will manage all the routing in the cluster as long as all the k8s nodes can route to each other, even over disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, permanent static routes need to be added for all the networks Kubernetes uses. The "magic" that cloud providers and the overlay CNI plugins provide is being able to route all those networks automatically. Each node is assigned a Pod subnet, and every node in the cluster needs a route to those IPs.
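A minimal sketch of that static-route approach, assuming three hypothetical nodes (10.0.0.1-10.0.0.3) that were each handed a /24 Pod subnet out of 10.244.0.0/16:

# On node1 (10.0.0.1, Pod subnet 10.244.1.0/24), add routes to the
# other nodes' Pod subnets; repeat the equivalent on every node:
ip route add 10.244.2.0/24 via 10.0.0.2   # node2's Pod subnet
ip route add 10.244.3.0/24 via 10.0.0.3   # node3's Pod subnet

Anything outside the cluster that needs to reach Pods directly (an upstream router, for example) needs the same routes.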

How to enable and leverage OVS-DPDK in OpenStack

I have successfully deployed OpenStack Packstack (all-in-one) in a single VM running CentOS 7. Everything works well and I am able to instantiate VMs over native OVS networking.
I plan to enable OVS-DPDK in my OpenStack. I have gone through some documentation but it is still not clear how to enable it. I understand that the OVS shipped with OpenStack Queens already supports DPDK. I have seen people asking the same question with no answer, so I would like to ask again how I can enable DPDK support on my running OpenStack. Is it a matter of changing the Neutron configuration files, or of deploying an SDN controller? I hope there is no need to redeploy. Any further advice would be much appreciated.
This is my current OVS version:
ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.11.0
DPDK 18.11.0
I really appreciate your help and support with my question. Thank you.
OpenStack and OVS are two different processes which communicate via OpenFlow rules. That is, the Neutron plugin used in OpenStack needs to be configured for the ports in use, and OVS needs to be started with the IP address of the OpenStack Neutron controller. Hence the changes are:
use Neutron or a similar plugin for network configuration.
update the Neutron plugin config file with the desired ports for OVS.
start ovs-vswitchd with the IP address of the controller.
Note: the OVS binary needs to be built with the DPDK libraries to have DPDK support. Do not expect that installing DPDK on the distro will make the OVS binary DPDK-capable.
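As a hedged sketch rather than a full recipe, enabling the userspace datapath on a DPDK-built OVS and pointing the Neutron OVS agent at it typically looks like this; the hugepage memory size and file path are assumptions for a Packstack all-in-one:

# Initialise DPDK inside a DPDK-built OVS and give it hugepage memory:
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024"
systemctl restart openvswitch
ovs-vsctl get Open_vSwitch . dpdk_initialized   # should report: true

# Switch the Neutron OVS agent to the userspace (netdev) datapath in
# /etc/neutron/plugins/ml2/openvswitch_agent.ini:
#   [ovs]
#   datapath_type = netdev
#   vhostuser_socket_dir = /var/run/openvswitch
systemctl restart neutron-openvswitch-agent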

Why would someone add an OpenDaylight controller in an OpenStack cloud?

I have seen a lot of people saying that OpenDaylight is the best SDN controller for OpenStack. While I understand the benefits of a software defined network, I can't see how ODL is better than Neutron in an OpenStack cloud.
In both cases the Open vSwitch instances are automatically configured (either by Neutron or ODL), and the end user just uses the regular OpenStack web interface or the command line to create their networks and VMs.
So what is the reason people add an SDN controller, especially OpenDaylight, to a cloud like OpenStack?
Thanks in advance.
First, let me clarify that Neutron is only an API layer and always needs a backend (configured as a Neutron plug-in) service in order to implement the actual networking for the OpenStack cloud. In many cases, the Open vSwitch (OVS) plug-in is configured out of the box, and people are mixing Neutron and the actual OVS-based Neutron implementation.
To answer your question: OpenStack and Neutron are all about choice. If the OVS solution for Neutron is good enough for you, then great -- you don't need an "SDN" nor OpenDaylight in the mix. But some find this solution not good enough for them, typically because of missing functionality like controlling both the virtual and physical network from one place, bridging between Neutron overlay networks (typically VXLAN VNIs) and existing networks in the data center (VLAN, IP/MPLS, etc.), connecting OpenStack projects with other (non-OpenStack) infrastructure (e.g. VMware, public cloud), and so on. This is where the OVS solution is being replaced with another "SDN".
Around Icehouse time-frame, the concept of Modular Layer 2 (ML2) was introduced. Many SDN solutions plug-in into Neutron via this ML2 interface and a mechanism driver.
It should be noted that ML2 is focused on L2 Neutron resources. In order to implement L3 resources (like routing, NAT) there is a need for an L3 service plugin. Similarly, there are separate driver interfaces for L4-L7 resources such as LBaaS, VPNaaS, FWaaS, BGP/VPN and so on. So depending on the SDN solution and its capabilities you may see a combination of mechanism driver, an L3 service plug-in, and L4-L7 drivers. As an example, the OpenDaylight plug-in for Neutron (aka networking-odl) includes an ML2 mechanism driver, but also a bunch of other drivers for L3-L7 services, see https://github.com/openstack/networking-odl.
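To make the plug-in wiring concrete, here is a hedged sketch of how networking-odl is typically hooked into Neutron; the ODL endpoint and credentials are illustrative assumptions:

# /etc/neutron/plugins/ml2/ml2_conf.ini
#   [ml2]
#   mechanism_drivers = opendaylight_v2
#   [ml2_odl]
#   url = http://192.0.2.10:8080/controller/nb/v2/neutron   # assumed ODL endpoint
#   username = admin                                        # illustrative credentials
#   password = admin
#
# /etc/neutron/neutron.conf -- hand L3 (routing/NAT) to ODL as well:
#   [DEFAULT]
#   service_plugins = odl-router_v2
systemctl restart neutron-server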

Configuring OpenStack for an in-house test cloud

We're currently looking to migrate an old and buggy Eucalyptus cloud to OpenStack. We have ~15 machines that are all on the same office-internal network. The instances get their network configuration from an external (not Eucalyptus) DHCP server. We run both Linux and Windows images. The cloud is used exclusively for platform testing from Jenkins.
Looking into OpenStack, it seems that of the three supported networking modes, none really fits our environment. What we are looking for is something like an "unmanaged mode" where OpenStack launches an instance that is hooked up to the eth0 interface of the instance's compute node and receives its network configuration from the external DHCP server on boot. I.e. the VMs, guest hosts, and clients (Jenkins) are all on the same network, managed by an external DHCP server.
Is a scenario like this possible to set up in OpenStack?
It's not commonly used, but the networking setup that will fit your needs best is FlatNetworking (not FlatDHCPNetworking). There isn't stellar documentation on configuring that setup for your environment, and some pieces (like the nova-metadata service) may be a bit tricky to manage with it, but it should allow you to run an OpenStack cloud with an external DHCP provider.
I wrote up the wiki page http://wiki.openstack.org/UnderstandingFlatNetworking some time ago to explain the setup of the various networks and how they operate with regards to NICs on hosting systems. FlatNetworking is effectively the same as FlatDHCPNetworking except that OpenStack doesn't try and run the DHCP service for you.
Note that with this mode, all the VM instances will be on the same network with your OpenStack infrastructure - there's no separation of networks at all.
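For reference, a hedged sketch of the relevant legacy nova-network options in nova.conf; the interface and bridge names are assumptions for the office network described above:

# /etc/nova/nova.conf (nova-network era, illustrative values)
#   network_manager = nova.network.manager.FlatManager   # flat mode, no OpenStack-run DHCP
#   flat_interface = eth0                                 # assumed NIC on the office LAN
#   flat_network_bridge = br100                          # bridge the VM taps attach to

Unlike FlatDHCPManager, FlatManager starts no dnsmasq, so guests fall through to your external DHCP server.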
