Best practice to expose a service in Kubernetes using Calico - networking

Having set up a Kubernetes cluster with Calico for the one-IP-per-pod networking, I'm wondering what the best practice is to expose services to the outside world.
IMHO I've got two options here. Option 1: BGP the internal pod IPs (172...) to an edge router/firewall (VyOS in my case) and do NAT on the firewall/router. But then I'd need one public IP per exposed pod.
Pro: fewer public IPs need to be used.
Con: pod changes require updated firewall rules?!
Option 2: take the provided public network and hand it over to Calico as an IP pool to be used for the pods.
Con: lots of public IPs wasted on internal services that won't be exposed to the internet.
I hope someone can enlighten me or point me in the right direction.
Thanks!

Calico doesn't provide any special way to expose services in Kubernetes. You should use standard Kubernetes Services, NodePorts, and the like to expose your services. In the future, there's a possibility that Calico will offer some of the features that kube-proxy currently does for Kubernetes (such as exposing service IPs), but right now Calico fits in at the low-level networking API layer only. Calico's real strength in the Kubernetes integration is the ability to define network security policy using the new Kubernetes NetworkPolicy API.
Source: I'm one of Calico's core developers.
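As an illustration, exposing a workload with a plain NodePort Service and locking it down with a NetworkPolicy could look like the sketch below. The names, labels, and ports are placeholders, and the NetworkPolicy is written against the current networking.k8s.io/v1 API group, which postdates this answer.

# Standard Kubernetes objects, nothing Calico-specific; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # placeholder service name
spec:
  type: NodePort
  selector:
    app: my-app                 # placeholder pod label
  ports:
  - port: 80                    # cluster-internal service port
    targetPort: 8080            # container port (placeholder)
    nodePort: 30080             # exposed on every node; must fall in 30000-32767
---
# A NetworkPolicy that Calico can enforce: only pods labelled role=frontend
# may reach the my-app pods on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-my-app
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080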

Calico is not responsible for Kubernetes Service IP management or for translating a Service IP to a container (workload endpoint). It allocates IP addresses to newly created pods and makes the necessary system configuration changes to implement the Calico policies.
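To illustrate the IP-pool side (and option 2 from the question above), a Calico IPPool resource in the v3 projectcalico.org schema might look like the sketch below; the CIDR is a placeholder, and older Calico releases used a different resource format.

# Applied with calicoctl apply -f; the CIDR is a placeholder for the routable network.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: public-pool
spec:
  cidr: 198.51.100.0/24   # placeholder: the public network handed to Calico
  ipipMode: Never         # no encapsulation, so pod IPs are routed natively (e.g. via BGP)
  natOutgoing: false      # pods use their own routable addresses for outbound traffic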

Related

Does K8s run on plain Layer 2 network infrastructure?

Does K8s run on a plain Layer 2 network with no support for Layer 3 routing?!
I'm asking because I want to switch my K8s environment from cloud VMs over to bare metal, and I'm not aware of the private-network infrastructure of the hoster ...
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network of the kind you would find on a private network (but it does rely on Layer 3/4 connectivity, not just Layer 2).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or it can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel, or Weave, which will manage all the in-cluster routing as long as all the k8s nodes can route to each other, even across disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, clusters need permanent static routes added for all the networks Kubernetes uses. The "magic" that cloud providers and the overlay CNI plugins provide is the ability to route all those networks automatically. Each node is assigned a pod subnet, and every node in the cluster needs a route to those IPs, as the sketch below illustrates.
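As a concrete illustration of that last point, each Node object carries the pod subnet assigned to it; for a static setup, every other node (and the router) needs a route for that subnet pointing at the node's address. All names and addresses below are placeholders.

# Viewed with: kubectl get node worker-1 -o yaml
# For static routing, the rest of the cluster needs something like
#   10.244.1.0/24 via 192.168.1.12
# where 192.168.1.12 is worker-1's own address (placeholder values).
apiVersion: v1
kind: Node
metadata:
  name: worker-1            # placeholder node name
spec:
  podCIDR: 10.244.1.0/24    # placeholder pod subnet assigned to this node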

Add EC2 nodes to a bare-metal Kubernetes cluster

I have a Kubernetes cluster set up on bare-metal local nodes (all nodes are accessible through both the public network and the private network).
I want to add an EC2 node to this cluster.
I have four nodes: MASTER, WORKER-1, WORKER-2, and EC2-NODE.
MASTER, WORKER-1, and WORKER-2 have full connectivity through the public and private networks.
But EC2-NODE is only reachable over the public network from the other nodes.
I have tried joining the EC2 node to the cluster with --node-ip=$public_ip_of_ec2_node.
The EC2 node joined successfully and is marked Ready, but services on the EC2 node are not reachable from the other nodes. It joins on the private network interface (eth0) and exposes the private IP of the EC2 node to the cluster.
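For reference, one common way to pass that flag is at join time through a kubeadm configuration file; the sketch below uses placeholder endpoint, token, and IP values and is not necessarily how the asker did it.

# kubeadm join --config=join.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "203.0.113.10:6443"   # placeholder control-plane endpoint
    token: "abcdef.0123456789abcdef"         # placeholder bootstrap token
    unsafeSkipCAVerification: true           # quick test only; prefer caCertHashes
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "203.0.113.25"                  # placeholder: the EC2 node's public IP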
In Kubernetes there is a requirement that all nodes have full network connectivity between them, either private or public. What does that mean?
Is it required to have a single network interface among nodes?
Any help would be nice.
Thank you in advance.
System Info:
Kubernetes version: 1.16.2
Pod network: Flannel
Let's start with understanding how to implement the Kubernetes networking model:
There are a number of ways that this network model can be implemented.
This document is not an exhaustive study of the various methods, but
hopefully serves as an introduction to various technologies and serves
as a jumping-off point.
There you can find a list of networking options. Among them there is Flannel:
Flannel is a very simple overlay network that satisfies the Kubernetes
requirements. Many people have reported success with Flannel and
Kubernetes.
Flannel is responsible for providing a layer 3 IPv4 network between
multiple nodes in a cluster. Flannel does not control how containers
are networked to the host, only how the traffic is transported between
hosts. However, flannel does provide a CNI plugin for Kubernetes and a
guidance on integrating with Docker.
You are already using Flannel as a CNI plugin.
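For context, Flannel's cluster-wide pod network is configured through its ConfigMap; the sketch below shows the usual shape of kube-flannel-cfg with the common defaults, which are not necessarily the asker's values.

kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }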
Please let me know if you find the info above helpful.

Use SoftEther VPN (virtual adapter) for the Kubernetes network and the default adapter for ingress

I have a SoftEther VPN server hosted on Ubuntu Server 16.04, and I can connect to the VPN from other Linux/Windows machines. My goal is to use the VPN only for Kubernetes networking or when the server is making a web request, but I don't want to use the VPN to expose my NodePorts/Ingress/load balancers. I want to use the default adapter (eth0) to expose those. I am not a Linux expert or a network engineer. Is this possible? If yes, please help. Thanks
Ingress controllers and load balancers usually rely on the NodePort functionality, which in turn relies on the Kubernetes network layer. Kubernetes has some network requirements to ensure all its functionality works as expected.
Because SoftEther VPN supports Layer 2 connectivity, it's possible to use it for connecting cluster nodes.
To limit its usage for NodePorts and LBs, you just need to ensure that nodes on the other side of the VPN are not included in the LB pool used to forward traffic to NodePort services, which may require managing the LB pool manually or using a cloud API call from a script.
Ingress controllers are usually exposed via NodePort as well, so the same applies there.
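One related knob, offered as an aside rather than something from the answer above: kube-proxy can be told which node addresses NodePorts listen on via nodePortAddresses, which is one way to keep NodePorts on the eth0 network and off the VPN interface. A sketch of the relevant KubeProxyConfiguration fragment, with a placeholder CIDR standing in for the eth0 subnet:

# Fragment of the kube-proxy configuration (typically in the kube-proxy ConfigMap).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
nodePortAddresses:
- "203.0.113.0/24"   # placeholder: NodePorts only listen on node addresses in this range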

Kubernetes service makes outbound connections - how to make them originate from a virtual IP

I currently have a Kubernetes cluster, and we have a service that needs to be accessible from a virtual IP.
This in itself is not a difficult process - I can use keepalived and NodePorts. However, I need that service, when it makes outbound connections, to be bound to that virtual IP (this is due to a legacy system we interact with).
Is there anything in place, or anything I can use, that will help me with this in a generic way?
I essentially want traffic from a specific service to come out of the virtual IP and not the Kubernetes host's IP.
You can use hostNetwork: true for your deployment; this will start your pods outside of NAT, and you will be able to see all the system interfaces.
Keep in mind that nodePort won’t be available when hostNetwork is enabled.
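A minimal sketch of that suggestion, with placeholder names and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-client               # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-client
  template:
    metadata:
      labels:
        app: legacy-client
    spec:
      hostNetwork: true             # pod shares the node's network namespace,
                                    # so outbound traffic uses the node's (or VIP's) addresses
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working with hostNetwork
      containers:
      - name: app
        image: example.com/legacy-client:latest   # placeholder image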

How to expose a service in Kubernetes running on bare metal

Kubernetes version: 1.10, running on bare metal
No. of masters: 3
We are running multiple microservices inside a Kubernetes cluster. Currently, we expose these services outside the cluster using NodePort. Each microservice has its own NodePort, so we have to maintain a list of the corresponding microservices. Since we are running on bare metal, we don't have features like LoadBalancer when exposing a microservice.
Problem: since we have multiple masters and workers inside the cluster, we have to use a static IP or DNS name for one master at a time. If I want to access any service from outside the cluster, I have to use IP_ADDRESS:NODEPORT or DNS:NODEPORT. At any given time I can only use the address of one master. If that master goes down, I have to change the microservices' address to another master's address. I don't want to rely on a static IP or DNS name of any single master.
What would be a better way to expose these microservices without NodePort? Is there any feature like LoadBalancer on bare metal? Can Ingress or NGINX help us?
There is a LoadBalancer for bare metal: it's called MetalLB. The project is available on GitHub; unfortunately this solution is in an alpha state and is more complex.
You can also follow the instructions from NGINX and set up a round-robin method for TCP or UDP.
Ingress only supports HTTP(S), on ports 80 and 443 only.
You can of course set up your own ingress controller, but it will be a lot of extra work.
The downside of NodePort is the limited range of usable ports, 30000 to 32767, and if the IP of the machine changes your services will become inaccessible.
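For illustration, early MetalLB releases were configured through a ConfigMap; the sketch below shows a Layer 2 address pool with a placeholder range (newer MetalLB versions use CRDs instead).

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range on the bare-metal LAN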
