OpenStack with Neutron on two physical nodes

We have two physical systems (Ubuntu 14.04.2), each with two physical NICs.
Is it possible to install OpenStack (Juno) with Neutron on these?
The official documentation says we need three nodes, with the network node having three NICs.
Any help would be greatly appreciated.
Thanks,
Deepak

You can install all of OpenStack on a single system for development and testing purposes. Given that a single node installation is possible, it should follow that a two-node installation is also possible (and it is).
The documentation recommends three NICs because this leads to the simplest configuration. However, you can run a network host with two NICs. There are several different traffic types you'll be dealing with:
Public web (Horizon) traffic
Public API traffic (if you expose the APIs)
Internal API traffic
Tenant internal network traffic (traffic between Nova instances and the compute host)
Tenant external network traffic (traffic between Nova instances and "the rest of the world")
Storage (transferring Glance images, iSCSI for Cinder volumes, etc)
Being able to segment these in a meaningful fashion can lead to a more manageable and more performant environment. With only two NICs, you are probably looking at one for "internal traffic" (internal API, storage, tenant internal networking, etc.) and one for "external traffic" (dashboard, public APIs, tenant external traffic). This is certainly possible, but it means, for example, that excessive traffic from your tenants can impact access to the dashboard, and that a high volume of storage traffic can impact access to Nova instances.
If/when your environment grows beyond two nodes, you may want to investigate adding additional NICs to your configuration.
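As a sketch, the two-NIC split described above might look like the following `/etc/network/interfaces` fragment on Ubuntu 14.04; the interface names, addresses, and the `br-ex` bridge choice are assumptions for illustration, not part of the original question:

```
# Internal traffic: internal APIs, storage, tenant internal networking
auto eth0
iface eth0 inet static
    address 10.0.0.11        # assumed management/internal address
    netmask 255.255.255.0

# External traffic: dashboard, public APIs, tenant external (floating IP) traffic
auto eth1
iface eth1 inet manual       # typically attached to an OVS bridge such as br-ex
    up ip link set dev $IFACE up
```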

Related

Does K8s run on plain Layer2 network infrastructure?

Does K8s run on a plain Layer 2 network with no support for Layer 3 routing?
I'm asking because I want to switch my K8s environment from cloud VMs to bare metal, and I'm not aware of the private network infrastructure of the hosting provider...
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network of the kind you would find on a private network (but it does rely on Layer 4 networking).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or it can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel or Weave, which will manage all the in-cluster routing as long as all the k8s nodes can route to each other, even across disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, clusters will need permanent static routes added for all the networks Kubernetes uses. The "magic" that cloud providers and the overlay CNI plugins provide is being able to route all those networks automatically. Each node will be assigned a Pod subnet, and every node in the cluster will need a route to those IPs.
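The static-route bookkeeping described above can be sketched in a few lines of Python that emit the `ip route` commands each node would need; the node names, host IPs, and pod CIDRs below are assumptions for illustration:

```python
# Sketch: given each node's host IP and assigned Pod subnet, every node
# needs a static route to every *other* node's Pod subnet via that node's
# host IP. All names and addresses here are assumed example values.
NODES = {
    "node-a": {"ip": "192.168.1.10", "pod_cidr": "10.244.0.0/24"},
    "node-b": {"ip": "192.168.1.11", "pod_cidr": "10.244.1.0/24"},
    "node-c": {"ip": "192.168.1.12", "pod_cidr": "10.244.2.0/24"},
}

def routes_for(local_node):
    """Return the `ip route add` commands to run on local_node."""
    return [
        f"ip route add {info['pod_cidr']} via {info['ip']}"
        for name, info in NODES.items()
        if name != local_node
    ]

for cmd in routes_for("node-a"):
    print(cmd)
```

This is exactly the work an overlay CNI plugin (or a cloud provider's route controller) automates for you.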

How to use one external IP for multiple instances

In GCloud we have one Kubernetes cluster with two nodes. Is it possible to set up all nodes to share the same external IP? Right now we are getting two external IPs.
Thank you in advance.
The short answer is no, you cannot assign the very same external IP to two nodes or two instances, but you can use the same IP to access them, for example through a LoadBalancer.
The long answer
Depending on your scenario and the infrastructure you want to set up, several ways are available to expose different resources through the very same IP.
I do not know why you want to assign the same IP to the nodes, but since each node is a Google Compute Engine instance, you can set up a load balancer (TCP, SSL, HTTP(S), internal, etc.). In this way you reach the nodes as if they were not part of a Kubernetes cluster; basically you are treating them as Compute Engine instances, and you will be able to connect to any port they are listening on (for example an HTTP server or an external health check).
Notice that you will not be able to connect to the pods this way: the services and the containers run in a separate software-based network, and they will not be reachable unless properly exposed, for example with a NodePort.
On the other hand, if you are interested in making your pods running on two different Kubernetes nodes reachable through a unique entry point, you have to set up Kubernetes ingress and load balancing to expose your services. These resources are also based on the Google Cloud Platform load balancer components, but when created they additionally trigger the required changes to the Kubernetes network.
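For the pod-level case, a Service of type `LoadBalancer` is the usual way to get one external IP in front of pods spread across both nodes; the names, labels, and ports below are assumptions for illustration:

```yaml
# Sketch: one external IP (provisioned by the GCP load balancer) fronting
# pods on any node. Name, selector, and ports are assumed example values.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web            # matches pods on either node
  ports:
    - port: 80          # port exposed on the load balancer's single IP
      targetPort: 8080  # container port inside the pods
```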

Network settings in Openstack with single OpenVPN connection

I'm trying to set up an OpenStack environment with two Kubernetes clusters, one production and one testing. My idea was to separate them with two networks in OpenStack and then have a VPN in front, to limit the exposure through floating IPs (for this I would have a proxy that routes requests to the correct internal addresses).
However, issues arise when trying to tunnel requests to both networks when connected to the VPN. Either I choose to run the VPN in its own network or in one of the two, but I don't seem to be able to make requests across network boundaries.
Is there a better way to configure the networking in Openstack or OpenVPN, so that I can keep the clusters separated and still have access to all resources through one installation of OpenVPN?
Is it better to run everything in the same OpenStack network and separate them with subnets? Can I still have the production and test clusters expose different IP addresses externally? Are they still separated enough to limit the risk of them accessing each other?
Side note: I use Terraform to deploy the infrastructure and Ansible to install resources, in case someone has suggestions along the lines of already prepared scripts.
Thanks,
The solution I went for was to separate the environments with their own networks and CIDR ranges and then attach both networks to the VPN instance to give it access to them. From there I just tunnel everything.
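As a sketch, the OpenVPN side of that solution amounts to pushing a route for each cluster network to connecting clients; the CIDRs below are assumptions for illustration:

```
# OpenVPN server.conf fragment (all CIDRs are assumed example values)
server 10.8.0.0 255.255.255.0          # VPN client subnet
push "route 10.10.0.0 255.255.255.0"   # production cluster network
push "route 10.20.0.0 255.255.255.0"   # testing cluster network
```

The VPN instance also needs a port on each OpenStack network (and IP forwarding enabled) so it can actually forward the tunnelled traffic.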

Can we send requests directly on a bare metal machine in OpenStack?

Do I need Controllers and Neutron nodes to send requests on a bare metal machine in OpenStack? Can we send requests directly on a bare metal machine without passing them through controller/neutron nodes?
Provider networking allows you to attach Nova instances directly to existing layer 2 networks so that they do not need to transit the Neutron controller for either local or external network access. You can mix-and-match provider networks with normal OpenStack virtual networks depending on your needs and available network resources.
This same solution would allow baremetal machines to communicate without involving the Neutron host.
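As a sketch, creating a flat provider network with the Juno-era neutron CLI might look like the following; the `physnet1` mapping name, subnet, and address ranges are assumptions for illustration:

```shell
# Create a shared network attached directly to an existing L2 segment
neutron net-create provider-net \
    --shared \
    --provider:network_type flat \
    --provider:physical_network physnet1

# Give it a subnet matching the existing network's addressing
neutron subnet-create provider-net 203.0.113.0/24 \
    --name provider-subnet \
    --allocation-pool start=203.0.113.100,end=203.0.113.200 \
    --gateway 203.0.113.1
```

Instances (or baremetal nodes) attached to this network then exchange traffic on the physical segment without transiting the Neutron host.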

OpenStack Compute-node communicate/ping vms run on it

In Ceilometer, when pollsters collect meters from VMs, they use the hypervisor on the compute node. Now I want to write a new plugin for Ceilometer that does not use the hypervisor to collect meters; instead, I want to collect meters via a service installed on the VMs (meaning Ceilometer gets data from that service), so the compute node must be able to communicate with the VMs by IP (private IP). Is there any solution for this?
Thanks all.
In general the internal network used by your Nova instances is kept intentionally separate from the compute hosts themselves as a security precaution (to ensure that someone logged into a Nova server isn't able to compromise your host).
For what you are proposing, it would be better to adopt a push model rather than a pull model: have a service running inside your instances that publishes data to some service accessible at a routable IP address.
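A minimal sketch of that push model, assuming a hypothetical collector endpoint and a Ceilometer-style sample format (the URL, field names, and values are all assumptions, not Ceilometer's actual wire format):

```python
import json
import time
import urllib.request

# Assumed collector endpoint, reachable at a routable address
COLLECTOR_URL = "http://collector.example.com:8080/metrics"

def build_sample(instance_id, meter, value, unit):
    """Build a Ceilometer-style sample the in-guest agent would publish."""
    return {
        "resource_id": instance_id,
        "counter_name": meter,
        "counter_volume": value,
        "counter_unit": unit,
        "timestamp": time.time(),
    }

def push(sample, url=COLLECTOR_URL):
    """POST the sample to the collector (called periodically by the agent)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

sample = build_sample("vm-1234", "memory.usage", 512.0, "MB")
print(json.dumps(sample))
```

The instance only needs outbound reachability to the collector, so the compute host never has to route into the tenant network.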
