How to configure a VPN connection between two Kubernetes clusters - networking

How do I configure a VPN connection between two Kubernetes clusters?
The case is:
- two Kubernetes clusters running at different sites
- OpenVPN connectivity between the two clusters
- OpenVPN installed in both clusters, running in a separate container
How should the clusters be configured (VPN, routing, firewall) so that the nodes and containers of either cluster have connectivity through the VPN to the nodes and services of the other cluster?
Thank you for the answers!

You can use Submariner to connect multiple clusters. It creates a secure and performant connection between clusters on-premises and on public clouds; you can then export services and access them across all clusters in the cluster set.
We usually use this tool to run multiple Kubernetes clusters in different geographical locations and then replicate the databases across all of them, to avoid data loss in case of a data center incident.
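For reference, the subctl workflow looks roughly like this; a minimal sketch assuming subctl is installed locally, with hypothetical cluster names, kubeconfig paths and service name (exact flags vary by Submariner version):

# Install the broker on one cluster (this writes broker-info.subm locally):
subctl deploy-broker --kubeconfig kubeconfig-cluster-a
# Join both clusters to the broker:
subctl join broker-info.subm --kubeconfig kubeconfig-cluster-a --clusterid cluster-a
subctl join broker-info.subm --kubeconfig kubeconfig-cluster-b --clusterid cluster-b
# Export a service so the other cluster can discover and reach it:
subctl export service --kubeconfig kubeconfig-cluster-b --namespace db mongodb

Once exported, the service is resolvable from the other cluster via Submariner's Lighthouse DNS, e.g. as mongodb.db.svc.clusterset.local.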

What you need in Kubernetes is called federation.
Deprecated
Use of Federation v1 is strongly discouraged. Federation V1 never achieved GA status and is no longer under active development. Documentation is for historical purposes only.
For more information, see the intended replacement, Kubernetes Federation v2.
As for using a VPN with Kubernetes, I recommend the article Exposing Kubernetes cluster over VPN.
It describes how to connect a VPN node to a Kubernetes cluster or to Kubernetes services.
You might also be interested in reading the Kubernetes documentation on Running in Multiple Zones.
Also see Kubernetes multi-cluster networking made simple, which explains different use cases for VPNs across a number of clusters and strongly encourages using IPv6 instead of IPv4.
Why use IPv6? Because “we could assign a — public — IPv6 address to EVERY ATOM ON THE SURFACE OF THE EARTH, and still have enough addresses left to do another 100+ earths” [SOURCE]
Lastly, Introducing kEdge: a fresh approach to cross-cluster communication seems to make life easier and helps with the configuration and maintenance of VPN services between clusters.

Submariner is a very good solution, but unfortunately it doesn't support IPv6 yet, so if your use case involves IPv6 or dual-stack clusters this could be an issue.

Related

Does K8s run on plain Layer2 network infrastructure?

Does Kubernetes run on a plain Layer 2 network with no support for Layer 3 routing?
I'm asking because I want to move my Kubernetes environment from cloud VMs to bare metal, and I'm not familiar with the hosting provider's private network infrastructure.
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network of the kind you would find on a private network (but it does rely on Layer 4 networking).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or it can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel or Weave, which will manage all the in-cluster routing as long as all the Kubernetes nodes can route to each other, even over disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, clusters will need permanent static routes added for all the networks Kubernetes uses. The "magic" that cloud providers and the overlay CNI plugins provide is being able to route all those networks automatically. Each node is assigned a Pod subnet, and every node in the cluster needs a route to those IPs, as in the sketch below.
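For illustration, a minimal sketch of those static routes, assuming two hypothetical nodes with node IPs 192.168.1.11 and 192.168.1.12 hosting Pod subnets 10.244.1.0/24 and 10.244.2.0/24:

# On node-a (192.168.1.11, hosts Pod subnet 10.244.1.0/24):
ip route add 10.244.2.0/24 via 192.168.1.12
# On node-b (192.168.1.12, hosts Pod subnet 10.244.2.0/24):
ip route add 10.244.1.0/24 via 192.168.1.11

Every additional node needs an equivalent route on every other node (or on a shared router), which is exactly the bookkeeping the overlay plugins automate.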

How to connect multiple clouds with overlapping VPCs?

We are creating a Console to administer, view logs and metrics, and create resources on Kubernetes in a multi-cloud environment.
The Console (a web app) is deployed on GKE in GCP, but we can't figure out how to connect to and reach the Kubernetes API servers in multiple VPCs with overlapping IPs without exposing them on public IPs.
I drew a little diagram to illustrate the problem.
Are there products or best practices for doing this securely?
Vendors such as Mongo Atlas or Confluent Cloud seem to have solved this issue; they can create infrastructure in multiple clouds and administer it.
It's not possible to connect two overlapping networks with a VPN, even if they're in different clouds (GCP & AWS).
I'd suggest using NAT on both sides and connecting the networks with a VPN; there is a rough iptables sketch after the links below.
Here's some documentation that may help you. Unfortunately it's quite a bit of reading and setting up. It's not the easiest solution, but it has the benefit of being reliable, and it's a fairly old and well-tested approach.
General docs
- Configure NAT to Enable Communication Between Overlapping Networks
- Using NAT in Overlapping Networks
GCP side
- Cloud NAT overview
- Using Cloud NAT
AWS side
- NAT instances
- Comparison of NAT instances and NAT gateways
Your second option is to split the original networks into smaller chunks so they no longer overlap, but that's not always possible (the networks may already be small and many of the IPs already used up...).
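To make the NAT idea concrete, here is a rough sketch using Linux iptables NETMAP on a gateway on each side; the translated ranges 10.100.2.0/24 and 10.200.2.0/24 are hypothetical, and the Cisco/GCP/AWS docs above describe the equivalent setup with their own NAT products:

# On the GCP-side gateway: the peer addresses this site as 10.100.2.0/24,
# which is mapped 1:1 onto the real local subnet 192.168.2.0/24.
iptables -t nat -A PREROUTING  -d 10.100.2.0/24  -j NETMAP --to 192.168.2.0/24
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -j NETMAP --to 10.100.2.0/24
# The AWS-side gateway does the same with its own translated range
# (e.g. 10.200.2.0/24), and the VPN only ever routes the translated ranges.

Each side then talks to the other via its translated range, so the overlapping 192.168.2.0/24 prefixes never meet on the wire.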
It depends on a couple of factors in the environments.
To access an overlapping network you need some form of gateway.
It can be some kind of proxy (SOCKS/HTTP/other) or a router/gateway (with NAT).
If, from GCP, you can reach 192.168.23.0/24 or any other subnet that can connect to the AWS 192.168.2.0/24 subnet, then you can use either of the solutions.
I assume that AWS and GCP can provide the tunnel between the gateway/proxy networks.
If you don't need a security layer for the tunnel, you can use a VXLAN tunnel and secure the TCP/other application protocol instead.
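As a sketch of that last point, a point-to-point VXLAN tunnel between the two gateways could look like this (the public endpoints 203.0.113.10 and 198.51.100.20, the VNI and the /30 addressing are all hypothetical):

# On the GCP-side gateway (public IP 203.0.113.10):
ip link add vxlan42 type vxlan id 42 local 203.0.113.10 remote 198.51.100.20 dstport 4789
ip addr add 10.254.0.1/30 dev vxlan42    # the AWS-side peer uses 10.254.0.2/30
ip link set vxlan42 up
# Note: VXLAN itself is unencrypted; secure the application traffic (TLS)
# or wrap the tunnel in IPsec/WireGuard if confidentiality is required.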
Using Google Cloud VPN with an AWS Virtual Private Gateway you can accomplish such a thing. A detailed description by Google is given in this documentation.
It describes two VPN topologies:
- A site-to-site route-based IPsec VPN tunnel configuration.
- A site-to-site IPsec VPN tunnel configuration using Google Cloud Router and dynamic routing with the BGP protocol.
Additionally, when CIDR ranges overlap, you would need to create new VPCs/CIDR ranges that do not overlap. Otherwise you could never reliably connect to instances whose IP addresses exist in both AWS and GCP.
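For the route-based topology, the GCP side boils down to roughly the following. This is a rough sketch with hypothetical names, region and CIDRs; it assumes a Classic VPN gateway gcp-vpn-gw and its forwarding rules already exist, and that a matching tunnel is configured on the AWS Virtual Private Gateway:

# Create the IPsec tunnel towards the AWS VPN endpoint:
gcloud compute vpn-tunnels create gcp-to-aws \
    --region=us-central1 \
    --target-vpn-gateway=gcp-vpn-gw \
    --peer-address=203.0.113.30 \
    --shared-secret="$SHARED_SECRET" \
    --ike-version=2 \
    --local-traffic-selector=0.0.0.0/0 \
    --remote-traffic-selector=0.0.0.0/0
# Route the (non-overlapping) AWS CIDR through the tunnel:
gcloud compute routes create route-to-aws \
    --network=default \
    --destination-range=172.31.0.0/16 \
    --next-hop-vpn-tunnel=gcp-to-aws \
    --next-hop-vpn-tunnel-region=us-central1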

Should bare metal k8s clusters have physical network segregation?

I'm looking to deploy a bare metal k8s cluster.
Typically when I deploy k8s clusters, I have two networks: control plane and nodes. However, in this cluster I'd like to leverage Rook to present storage (Ceph/NFS).
Most advice I get and articles I read say that systems like ceph need their own backend, isolated cluster network for replication etc - ceph reference docs. Moreover, a common datacenter practice is to have a separate network for NFS.
How are these requirements and practices adopted in a k8s world? Can the physical network just be flat, and the k8s SDN does all the heavy lifting here? Do I need to configure network policies and additional interfaces to provide physical segregation for my resources?
Ceph best practice is to have a separate "cluster network" for replication/rebalancing and a client-facing network (the so-called "public network"), which is used by clients (such as K8s nodes) to connect to Ceph. The Ceph cluster network is totally different from the K8s cluster network; those are simply two different things. Ideally they should live on different NICs and switches/switch ports.
If you have separate NICs towards the Ceph cluster, you can create interfaces on the K8s nodes that talk to Ceph's "public network" over those dedicated NICs. That way there are separate interfaces for K8s management/inter-pod traffic and separate interfaces for storage traffic.
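In Ceph terms the split is expressed with two config options; a minimal sketch with hypothetical subnets (Rook offers ways to pass the equivalent settings to Ceph):

# Clients (K8s nodes) reach Ceph over 10.10.0.0/24, while OSD
# replication/rebalancing traffic stays on 10.20.0.0/24.
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
public_network  = 10.10.0.0/24
cluster_network = 10.20.0.0/24
EOF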

How to use one external IP for multiple instances

In GCloud we have one Kubernetes cluster with two nodes. Is it possible to set up all nodes to get the same external IP? Right now we are getting two external IPs.
Thank you in advance.
The short answer is no, you cannot assign the very same external IP to two nodes or two instances, but you can use the same IP to access them, for example through a LoadBalancer.
The long answer
Depending on your scenario and the infrastructure you want to set up, several ways are available to expose different resources through the very same IP.
I do not know why you want to assign the same IP to the nodes, but since each node is a Google Compute Engine instance, you can set up a load balancer (TCP, SSL, HTTP(S), internal, etc.). In this way you reach the nodes as if they were not part of a Kubernetes cluster; basically you are treating them as Compute Engine instances, and you will be able to connect to any port they are listening on (for example an HTTP server or an external health check).
Notice that you will not be able to reach the Pods this way: the services and the containers run in a separate software-based network and will not be reachable unless properly exposed, for example with a NodePort.
On the other hand, if you are interested in making Pods running on two different Kubernetes nodes reachable through a single entry point, you have to set up Kubernetes Ingress and load balancing to expose your services. These resources are also based on the Google Cloud Platform load balancer components, but when created they also trigger the required changes to the Kubernetes network.
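As a concrete illustration of that second approach, a Service of type LoadBalancer gives Pods on both nodes one shared external IP. A minimal sketch; the name web-lb, the app: web selector and the ports are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web            # matches the Pods on either node
  ports:
  - port: 80            # the single external IP listens here...
    targetPort: 8080    # ...and forwards to the Pods' container port
EOF
# GCP allocates one external IP for the Service; check the EXTERNAL-IP column:
kubectl get service web-lb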

Network settings in OpenStack with a single OpenVPN connection

I'm trying to set up an OpenStack environment with two Kubernetes clusters, one production and one testing. My idea was to separate them with two networks in OpenStack and then have a VPN in front to limit the exposure through floating IPs (for this I would have a proxy that routes requests to the correct internal addresses).
However, issues arise when trying to tunnel requests to both networks while connected to the VPN. Whether I run the VPN in its own network or in one of the two, I don't seem to be able to make requests across network boundaries.
Is there a better way to configure the networking in OpenStack or OpenVPN so that I can keep the clusters separated and still have access to all resources through one installation of OpenVPN?
Is it better to run everything in the same OpenStack network and separate them with subnets? Can I still have the production and test clusters expose different IP addresses externally? Are they still separated enough to limit the risk of them accessing each other?
Side note: I use Terraform to deploy the infrastructure and Ansible to install resources, in case someone has suggestions along the lines of already-prepared scripts.
Thanks,
The solution I went with was to give each environment its own network and CIDR and then attach both networks to the VPN instance so it has access to them. From there I just tunnel everything.
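A rough sketch of what that looks like on the VPN instance, assuming hypothetical CIDRs of 10.0.1.0/24 (production) and 10.0.2.0/24 (testing), the default 10.8.0.0/24 OpenVPN client subnet, and ports on both networks attached as eth1 and eth2:

# Push routes for both cluster networks to connecting VPN clients:
cat >> /etc/openvpn/server.conf <<'EOF'
push "route 10.0.1.0 255.255.255.0"
push "route 10.0.2.0 255.255.255.0"
EOF
# Forward and NAT client traffic into either network:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth1 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth2 -j MASQUERADE

Masquerading behind the VPN instance's own port addresses should also keep Neutron's port-security/anti-spoofing checks happy, since forwarded packets leave with the instance's own IP.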
