Should bare metal k8s clusters have physical network segregation? - networking

I'm looking to deploy a bare metal k8s cluster.
Typically when I deploy k8s clusters, I have two networks. Control Plane, and Nodes. However, in this cluster I'd like to leverage rook to present storage (ceph/nfs).
Most advice I get and articles I read say that systems like ceph need their own backend, isolated cluster network for replication etc - ceph reference docs. Moreover, a common datacenter practice is to have a separate network for NFS.
How are these requirements and practices adopted in a k8s world? Can the physical network just be flat, and the k8s SDN does all the heavy lifting here? Do I need to configure network policies and additional interfaces to provide physical segregation for my resources?

Ceph best practice is to have separate "cluster network" for replication/rebalancing and client-facing network (so called "public network") which is used by clients (like K8s nodes) to connect to Ceph. Ceph cluster network is totally different from K8s cluster network. Those are simply two different things. Ideally they should live on different NICs and switches/switchports.
If you have separate NICs towards Ceph cluster then you can create interfaces on K8s nodes to interact with Ceph's "public network" using those dedicated NICs. So there will be separate interfaces for K8s management/inter-pod traffic and separate interfaces for storage traffic.

Does K8s run on plain Layer2 network infrastructure?

Does K8s run on a plain Layer2 network with no support for Layer3 routing stuff?!?
I'm asking as I want to switch my K8s environment from cloud VMs over to bare metal and I'm not aware of the private-network infrastructure of the hoster ...
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network you would find on a private network (but does rely on layer 4 networking).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel or Weave that will manage all the routing in the cluster as long as all the k8s nodes can route to each other, even over disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, clusters will need permanent static routes added for all the networks Kubernetes uses. The "magic" cloud providers and the overlay CNI plugins provide is to be able to route all those networks automatically. Each node will be assigned a Pod subnet, and every node in the cluster will need to have a route to those IPs.
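The per-node static routes described in this answer, worked through as an added example that is not part of the original answer: for every remote node, the local node needs a route to that node's Pod CIDR via that node's IP. The node table is constructed sample data; the kubectl command in the comment is one way to obtain the same three fields from a real cluster.

```shell
#!/bin/sh
# Constructed sample node table (one node per line): <node-name> <node-ip> <pod-cidr>
# On a real cluster the same fields can be obtained with:
#   kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.status.addresses[?(@.type=="InternalIP")].address} {.spec.podCIDR}{"\n"}{end}'
NODE_TABLE='node-a 10.0.0.11 10.244.0.0/24
node-b 10.0.0.12 10.244.1.0/24
node-c 10.0.0.13 10.244.2.0/24'

# pod_routes <local-node-name>
# Prints one "ip route replace" command per remote node so that the remote
# node's Pod subnet is reached via that node's IP. The local node's own Pod
# CIDR is skipped: the CNI bridge on this host already owns it.
pod_routes() {
    local_node="$1"
    printf '%s\n' "$NODE_TABLE" | while read -r name ip cidr; do
        [ -n "$name" ] || continue
        [ "$name" = "$local_node" ] && continue
        printf 'ip route replace %s via %s\n' "$cidr" "$ip"
    done
}

# apply_pod_routes <local-node-name>
# Installs the generated routes on this host (requires root). To make them
# permanent, place the same commands in the distribution's route configuration
# or a boot-time unit, since "ip route" changes do not survive a reboot.
apply_pod_routes() {
    pod_routes "$1" | sh
}
```

Run `pod_routes node-a` on node-a to review the commands before applying them; nodes whose Pod CIDR changes (for example after being re-added) need their route regenerated on every other node.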

add EC2 nodes to baremetal kubernetes cluster

I have a Kubernetes cluster set up on bare-metal local nodes (all nodes are accessible through the public network and private network).
I want to add an EC2 node to this cluster.
I have four nodes as MASTER, WORKER-1, WORKER-2, EC2-NODE.
MASTER, WORKER-1, WORKER-2 have full connectivity through the public and private networks.
But EC2-NODE is only accessible on public networks from any node.
I have tried joining the EC2 node to the cluster and giving --node-ip=$public_ip_of_ec2_node.
The EC2 node joined successfully and is marked as ready, but services are not reachable from other nodes to the EC2 node. It joins on the private network interface (eth0) and exposes the private IP of the EC2 node to the cluster.
In Kubernetes, there is a requirement that all nodes have full connectivity between them, either private or public. What does it mean?
Is it required to have a single network interface among nodes?
Any help would be nice.
Thank you in advance.
System Info:
Kubernetes version: 1.16.2
Pod network: Flannel
Let's start with understanding how to implement the Kubernetes networking model:
There are a number of ways that this network model can be implemented.
This document is not an exhaustive study of the various methods, but
hopefully serves as an introduction to various technologies and serves
as a jumping-off point.
There you can find a list of networking options. Among them there is Flannel:
Flannel is a very simple overlay network that satisfies the Kubernetes
requirements. Many people have reported success with Flannel and
Kubernetes.
Flannel is responsible for providing a layer 3 IPv4 network between
multiple nodes in a cluster. Flannel does not control how containers
are networked to the host, only how the traffic is transported between
hosts. However, flannel does provide a CNI plugin for Kubernetes and a
guidance on integrating with Docker.
You are already using Flannel as a CNI plugin.
Please let me know if you find the info above helpful.

How to configure VPN connection between 2 Kubernetes clusters

How to configure a VPN connection between 2 Kubernetes clusters.
The case is:
- 2 kubernetes clusters running on different sites
- OpenVPN connectivity between 2 clusters
- In both Kubernetes clusters OpenVPN is installed, running in a separate container.
How to configure the Kubernetes clusters (VPN, routing, firewall configurations) so that the nodes and containers of either Kubernetes cluster have connectivity through the VPN to the nodes and services of the other cluster?
Thank you for the answers !!!
You can use Submariner to connect multiple clusters, it creates a secure and performant connection between the clusters on-premises and on public clouds, then you can export the services and access them across all clusters in the cluster set.
Usually we use this tool to create multiple K8S clusters in different geographical locations, then replicate the databases across all the clusters to avoid data loss in case of any data center incident.
What you need in Kubernetes is called federation.
Deprecated
Use of Federation v1 is strongly discouraged. Federation V1 never achieved GA status and is no longer under active development. Documentation is for historical purposes only.
For more information, see the intended replacement, Kubernetes Federation v2.
As for using a VPN in Kubernetes, I recommend Exposing Kubernetes cluster over VPN.
It describes how to connect a VPN node to a Kubernetes cluster or to Kubernetes services.
You might be also interested in reading Kubernetes documentation regarding Running in Multiple Zones.
Also Kubernetes multi-cluster networking made simple, which explains different use cases of VPNs across a number of clusters and strongly encourages using IPv6 instead of IPv4.
Why use IPv6? Because “we could assign a — public — IPv6 address to EVERY ATOM ON THE SURFACE OF THE EARTH, and still have enough addresses left to do another 100+ earths” [SOURCE]
Lastly Introducing kEdge: a fresh approach to cross-cluster communication, which seems to make life easier and helps with configuration and maintenance of VPN services between clusters.
Submariner is a very good solution but unfortunately doesn't support IPv6 yet, so if your use case has IPv6 or dual-stack clusters, then it could be an issue.

How do networking and load balancer work in docker swarm mode?

I am new to Docker and containers. I was going through the tutorials for Docker and came across this information.
https://docs.docker.com/get-started/part3/#docker-composeyml
    networks:
      - webnet
  networks:
    webnet:
What is webnet? The document says
Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves will publish to web’s port 80 at an ephemeral port.)
So, by default, the overlay network is load balanced in a Docker cluster? What is the load balancing algorithm used?
Actually, it is not clear to me why we have load balancing on the overlay network.
Not sure I can be clearer than the docs, but maybe rephrasing will help.
First, the doc you're following here uses what is called the swarm mode of docker.
What is swarm mode?
A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.
From SO Documentation:
A swarm is a number of Docker Engines (or nodes) that deploy services collectively. Swarm is used to distribute processing across many physical, virtual or cloud machines.
So, with swarm mode you have a multi-host (VMs and/or physical) cluster of machines that communicate with each other through their Docker engines.
Q1. What is webnet?
webnet is the name of an overlay network that is created when your stack is launched.
Overlay networks manage communications among the Docker daemons participating in the swarm
In your cluster of machines, a virtual network is created where each service has an IP mapped to an internal DNS entry (which is the service name), allowing Docker to route incoming packets to the right container anywhere in the swarm (cluster).
Q2. So, by default, overlay network is load balanced in docker cluster ?
Yes, if you use the overlay network, but you could also remove the service networks configuration to bypass that. Then you would have to publish the port of the service you want to expose.
Q3. What is load balancing algo used ?
From this SO question answered by swarm master bmitch ;):
The algorithm is currently round-robin and I've seen no indication that it's pluginable yet. A higher level load balancer would allow swarm nodes to be taken down for maintenance, but any sticky sessions or other routing features will be undone by the round-robin algorithm in swarm mode.
Q4. Actually it is not clear to me why do we have load balancing on overlay network
Purpose of docker swarm mode / services is to allow orchestration of replicated services, meaning that we can scale up / down containers deployed in the swarm.
From the docs again:
Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
So you can have deployed, say, 10 copies of the exact same container (let's say nginx with your app's html/js) without dealing with private network DNS entries, port configuration, etc... Any incoming request will be automatically load balanced to hosts participating in the swarm.
Hope this helps!

Openstack with neutron on two physical nodes

We have two physical systems (Ubuntu 14.04.2) having 2 physical NICs each.
Is it possible to install OpenStack (Juno) with Neutron on the same?
Official documentation says that we need 3 nodes, with the network node having 3 NICs.
Any help would be greatly appreciated.
Thanks,
Deepak
You can install all of OpenStack on a single system for development and testing purposes. Given that a single node installation is possible, it should follow that a two-node installation is also possible (and it is).
The documentation recommends three NICs because this leads to the simplest configuration. However, you can run a network host with two NICs. There are several different traffic types you'll be dealing with:
Public web (Horizon) traffic
Public API traffic (if you expose the APIs)
Internal API traffic
Tenant internal network traffic (traffic between Nova instances and the compute host)
Tenant external network traffic (traffic between Nova instances and "the rest of the world")
Storage (transferring Glance images, iSCSI for Cinder volumes, etc)
Being able to segment these in a meaningful fashion can lead to a more manageable and more performant environment. With only two NICs, you are probably looking at one for "internal traffic" (internal API, storage, tenant internal networking, etc) and one for "external traffic" (dashboard, public APIs, tenant external traffic). This is certainly possible, but it means, for example, that excessive traffic from your tenants can impact access to the dashboard, and that a high volume of storage traffic can impact access to Nova instances.
If/when your environment grows beyond two nodes, you may want to investigate adding additional NICs to your configuration.