Kubernetes Load Balancing with Parallels RAS or NGINX

Maybe I am way off in my pursuit to create as close to a real Kubernetes setup as possible on a local network :-)
Is it possible to use Parallels RAS as a load balancer for Kubernetes?
1. I am running a master node in Ubuntu on Parallels Desktop
2. And some worker nodes, also in Parallels Desktop
Both use bridged networking.
It would be cool if it were possible to have a setup that includes a LoadBalancer.

You could use MetalLB, kube-vip, or keepalived-operator (with HAProxy). I played around with kube-vip but now use MetalLB in Layer 2 mode in my Raspberry Pi based Kubernetes cluster. MetalLB in BGP mode would be even better if you have a router that supports the protocol (such as a UniFi gateway). The following references might help further:
https://www.openshift.com/blog/self-hosted-load-balancer-for-openshift-an-operator-based-approach
https://www.youtube.com/watch?v=9PLw1xalcYA
http://blog.cowger.us/2019/02/10/using-metallb-with-the-unifi-usg-for-in-home-kubernetes-loadbalancer-services.html
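If you go the MetalLB route, here is a minimal Layer 2 sketch using the metallb.io/v1beta1 CRD API; the address range below is an assumption, so pick free IPs on your bridged Parallels network:

    # Assumed address pool on the bridged network -- adjust to a free range.
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: bridged-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.240-192.168.1.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: bridged-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - bridged-pool

Any Service of type LoadBalancer then gets an external IP from that pool, reachable from the rest of the LAN.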

Related

Does K8s run on plain Layer 2 network infrastructure?

Does K8s run on a plain Layer 2 network with no support for Layer 3 routing?
I'm asking because I want to move my K8s environment from cloud VMs to bare metal, and I don't know the private-network infrastructure of the hosting provider ...
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network of the kind you would find on a private network (but it does rely on Layer 4 networking).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or it can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel or Weave, which will manage all the in-cluster routing as long as all the k8s nodes can route to each other, even across disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, permanent static routes need to be added for all the networks Kubernetes uses. The "magic" that cloud providers and the overlay CNI plugins provide is precisely the ability to route all those networks automatically. Each node is assigned a Pod subnet, and every node in the cluster needs a route to those IPs.
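As an illustration of the static approach, a netplan sketch for one node; all subnets and addresses here are assumptions. node-a (10.0.0.11) gets a route to the Pod CIDR assigned to node-b (10.0.0.12):

    # /etc/netplan/50-k8s-routes.yaml on node-a (hypothetical values)
    network:
      version: 2
      ethernets:
        eth0:
          addresses: [10.0.0.11/24]
          routes:
            - to: 10.244.1.0/24   # Pod subnet assigned to node-b
              via: 10.0.0.12      # node-b's host address

Every node needs one such route per peer node, which is exactly the bookkeeping the overlay plugins automate.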

Rancher: unable to connect to a Cluster IP from another project

I'm using Rancher 2.3.5 on CentOS 7.6.
In my cluster, "Project Network Isolation" is enabled.
I have 2 projects:
In project 1, I have deployed one Apache container that listens on port 80 on a cluster IP.
[image: network isolation config]
From the second project, I am unable to connect to project 1's cluster IP.
Does Project Network Isolation also block traffic to the cluster IP between the two projects?
Thank you
Other answers have correctly pointed out how a ClusterIP works from the standpoint of just Kubernetes; however, the OP specifies that Rancher is involved.
Rancher provides the concept of a "Project", which is a collection of Kubernetes namespaces. When Rancher is set to enable "Project Network Isolation", Rancher manages Kubernetes Network Policies such that namespaces in different Projects cannot exchange traffic on the cluster overlay network.
This creates the situation observed by the OP: when "Project Network Isolation" is enabled, a ClusterIP in one Project cannot exchange traffic with a traffic source in a different Project, even though they are in the same Kubernetes cluster.
There is a brief note about this in the Rancher documentation:
https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/
and while that document seems to limit the scope to Pods, because Pods and ClusterIPs are allocated from the same network, it applies to ClusterIPs as well.
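To make the mechanism concrete, here is a sketch of the kind of NetworkPolicy such isolation boils down to; the namespace and label key/value are illustrative, not Rancher's exact generated policy:

    # Only namespaces labeled with the same project ID may send traffic in.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: project-isolation
      namespace: app-ns             # hypothetical namespace in project 1
    spec:
      podSelector: {}               # applies to every Pod in the namespace
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  field.cattle.io/projectId: p-example   # assumed label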
K8s cluster IPs are restricted to communication within the cluster (a minimal ClusterIP Service sketch follows the list below). A good read on ClusterIP, NodePort and LoadBalancer Services can be found at https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other .
If your intention is to make the services in the 2 different clusters communicate, then go for one of the methods below:
Deploy an overlay network for your node group
Cluster peering
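For reference, the ClusterIP sketch; ClusterIP is the default Service type and is reachable only from inside the cluster (the names are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: apache            # hypothetical Service for the Apache Pods
    spec:
      type: ClusterIP         # the default; in-cluster access only
      selector:
        app: apache
      ports:
        - port: 80
          targetPort: 80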

How to configure a VPN connection between 2 Kubernetes clusters

How to configure a VPN connection between 2 Kubernetes clusters?
The case is:
- 2 Kubernetes clusters running on different sites
- OpenVPN connectivity between the 2 clusters
- In both Kubernetes clusters, OpenVPN is installed and running in a separate container.
How do I configure the Kubernetes clusters (VPN, routing, firewall configurations) so that the nodes and containers of either Kubernetes cluster have connectivity through the VPN to the nodes and services of the other cluster?
Thank you for the answers!
You can use Submariner to connect multiple clusters. It creates a secure and performant connection between clusters on-premises and on public clouds; you can then export services and access them across all clusters in the cluster set.
We usually use this tool when running multiple K8s clusters in different geographical locations, replicating the databases across all the clusters to avoid data loss in case of a data center incident.
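Once Submariner is deployed, exporting a Service is a one-object affair via the Multi-Cluster Services API; a sketch with illustrative names:

    # Makes the "db" Service in "prod" resolvable from the other clusters
    # in the cluster set (e.g. as db.prod.svc.clusterset.local).
    apiVersion: multicluster.x-k8s.io/v1alpha1
    kind: ServiceExport
    metadata:
      name: db                # must match the Service's name
      namespace: prod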
What you need in Kubernetes is called federation.
Deprecated
Use of Federation v1 is strongly discouraged. Federation V1 never achieved GA status and is no longer under active development. Documentation is for historical purposes only.
For more information, see the intended replacement, Kubernetes Federation v2.
As for using a VPN with Kubernetes, I recommend Exposing Kubernetes cluster over VPN.
It describes how to connect a VPN node to a Kubernetes cluster or to Kubernetes services.
You might also be interested in reading the Kubernetes documentation regarding Running in Multiple Zones.
Also Kubernetes multi-cluster networking made simple, which explains different use cases of VPNs across a number of clusters and strongly encourages using IPv6 instead of IPv4.
Why use IPv6? Because “we could assign a — public — IPv6 address to EVERY ATOM ON THE SURFACE OF THE EARTH, and still have enough addresses left to do another 100+ earths” [SOURCE]
Lastly, Introducing kEdge: a fresh approach to cross-cluster communication, which seems to make life easier and helps with configuration and maintenance of VPN services between clusters.
Submariner is a very good solution but unfortunately doesn't support IPv6 yet, so if your use case involves IPv6 or dual-stack clusters, that could be an issue.

Use SoftEther VPN (virtual adapter) for the Kubernetes network and the default adapter for ingress

I have a SoftEther VPN server hosted on Ubuntu Server 16.04, and I can connect to the VPN from other Linux/Windows machines. My goal is to use the VPN only for Kubernetes networking, or when the server is making a web request, but I don't want to use the VPN to expose my NodePorts/Ingress/load balancers. I want to use the default adapter (eth0) to expose those. I am not a Linux expert or a network engineer. Is this possible? If yes, please help. Thanks
Ingress controllers and load balancers usually rely on the NodePort functionality, which in turn relies on the Kubernetes network layer. Kubernetes has some network requirements to ensure all its functionality works as expected.
Because SoftEther VPN supports Layer 2 connectivity, it's possible to use it for connecting cluster nodes.
To limit its use for NodePorts and LBs, you just need to ensure that nodes on the other side of the VPN aren't included in the LB pool used for forwarding traffic to NodePort services, which may require managing the LB pool manually or calling a cloud API from scripts.
Ingress controllers are usually exposed by NodePort as well, so the same applies there.
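One way to keep NodePorts off the VPN virtual adapter is kube-proxy's nodePortAddresses setting, which limits the addresses NodePorts bind to; a sketch, assuming eth0 sits on 192.168.1.0/24:

    # Fragment of the kube-proxy configuration (KubeProxyConfiguration).
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses:
      - 192.168.1.0/24   # assumed subnet of the default (eth0) adapter

With this in place, NodePort services only listen on node addresses within that CIDR, so the SoftEther adapter's addresses are left alone.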

How do networking and load balancing work in Docker swarm mode?

I am new to Docker and containers. I was going through the Docker tutorials and came across this information:
https://docs.docker.com/get-started/part3/#docker-composeyml
Under the service definition:

    networks:
      - webnet

and at the top level of the file:

    networks:
      webnet:
What is webnet? The document says
Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves will publish to web’s port 80 at an ephemeral port.)
So, by default, the overlay network is load balanced in a Docker cluster? What load-balancing algorithm is used?
Actually, it is not clear to me why we have load balancing on the overlay network.
Not sure I can be clearer than the docs, but maybe rephrasing will help.
First, the doc you're following here uses what is called Docker's swarm mode.
What is swarm mode?
A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.
From SO Documentation:
A swarm is a number of Docker Engines (or nodes) that deploy services collectively. Swarm is used to distribute processing across many physical, virtual or cloud machines.
So, with swarm mode you have a multi-host (VMs and/or physical) cluster of machines that communicate with each other through their Docker engines.
Q1. What is webnet?
webnet is the name of an overlay network that is created when your stack is launched.
Overlay networks manage communications among the Docker daemons participating in the swarm
In your cluster of machines, a virtual network is then created, where each service gets an IP mapped to an internal DNS entry (the service name), allowing Docker to route incoming packets to the right container anywhere in the swarm (cluster).
Q2. So, by default, the overlay network is load balanced in a Docker cluster?
Yes, if you use the overlay network, but you could also remove the service's networks configuration to bypass that. Then you would have to publish the port of the service you want to expose.
Q3. What load-balancing algorithm is used?
From this SO question answered by swarm master bmitch ;):
The algorithm is currently round-robin and I've seen no indication that it's pluggable yet. A higher-level load balancer would allow swarm nodes to be taken down for maintenance, but any sticky sessions or other routing features will be undone by the round-robin algorithm in swarm mode.
Q4. Actually, it is not clear to me why we have load balancing on the overlay network
The purpose of Docker swarm mode / services is to allow orchestration of replicated services, meaning that we can scale the containers deployed in the swarm up and down.
From the docs again:
Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
So you can deploy, say, 10 identical containers (e.g. nginx serving your app's HTML/JS) without dealing with private-network DNS entries, port configuration, etc. Any incoming request will be automatically load balanced across hosts participating in the swarm.
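A minimal stack file sketch of that setup (image, names and replica count are illustrative):

    version: "3.8"
    services:
      web:
        image: nginx:alpine
        ports:
          - "80:80"            # published through the swarm routing mesh
        deploy:
          replicas: 5          # swarm spreads these across the nodes
        networks:
          - webnet
    networks:
      webnet:
        driver: overlay

Deployed with docker stack deploy -c docker-compose.yml demo, requests to port 80 on any node are round-robined across the 5 replicas over webnet.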
Hope this helps!
