Rancher: unable to connect to a cluster IP from another project - networking

I'm using Rancher 2.3.5 on CentOS 7.6.
In my cluster, "Project Network Isolation" is enabled.
I have 2 projects:
In project 1, I have deployed one Apache container that listens on port 80 on a cluster IP.
[image: network isolation config]
In the second project, I am unable to connect to the project 1 cluster IP.
Does Project Network Isolation also block the traffic to the cluster IP between the two projects?
Thank you

Other answers have correctly pointed out how a ClusterIP works from the standpoint of Kubernetes alone; however, the OP specifies that Rancher is involved.
Rancher provides the concept of a "Project" which is a collection of Kubernetes namespaces. When Rancher is set to enable "Project Network Isolation", Rancher manages Kubernetes Network Policies such that namespaces in different Projects can not exchange traffic on the cluster overlay network.
This creates the situation observed by the OP. When "Project Network Isolation" is enabled, a ClusterIP in one Project is unable to exchange traffic with a traffic source in a different Project, even though they are in the same Kubernetes Cluster.
There is a brief note about this in the Rancher documentation:
https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/
and while that document seems to limit the scope to Pods, it applies to ClusterIPs as well, because traffic sent to a ClusterIP is ultimately delivered to Pods on that same overlay network.
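As a rough illustration (not Rancher's exact generated objects; the namespace name and project ID below are placeholders), the isolation behaves like a NetworkPolicy of this shape in each of a project's namespaces, allowing ingress only from namespaces carrying the same project label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-project        # illustrative name
  namespace: web                  # a namespace belonging to project 1 (placeholder)
spec:
  podSelector: {}                 # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          field.cattle.io/projectId: p-abc123   # placeholder project ID label

With such policies in place, a pod in a different project cannot reach these pods directly, nor through a ClusterIP that fronts them.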

K8s cluster IPs are restricted to communication within the cluster. A good read on ClusterIP, NodePort and LoadBalancer can be found at https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other .
If your intention is to make services in two different clusters communicate, then go for one of the methods below.
Deploy an overlay network for your node group
Cluster peering

Related

Use SoftEther VPN (virtual adapter) for Kubernetes network and default adapter for ingress

I have a SoftEther VPN server hosted on Ubuntu Server 16.04, and I can connect to the VPN from other Linux/Windows machines. My goal is to use the VPN only for Kubernetes networking or when the server is making a web request, but I don't want to use the VPN to expose my NodePorts/Ingress/load balancers. I want to use the default adapter (eth0) to expose those. I am not a Linux expert or a network engineer. Is this possible? If yes, please help. Thanks
Ingress controllers and loadbalancers usually rely on the NodePort functionality which in turn relies on Kubernetes network layer. Kubernetes has some network requirements to ensure all its functionalities work as expected.
Because SoftEther VPN supports Layer2 connectivity it's possible to use it for connecting cluster nodes.
To limit its usage for NodePorts and LBs, you just need to ensure that nodes on the other side of the VPN are not included in the LB pool used to forward traffic to NodePort services, which may require managing the LB pool manually or using cloud API calls from scripts.
Ingress controllers are usually exposed via NodePort as well, so the same applies to them.
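If the goal is also to have NodePorts answer only on the default adapter rather than on the VPN addresses, one additional option (assuming kube-proxy is in use, and treating the CIDR below as a placeholder for eth0's subnet) is kube-proxy's nodePortAddresses setting:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
nodePortAddresses:
- "192.168.0.0/24"    # placeholder for the eth0 subnet; NodePorts then listen only on these addresses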

Difference between metalLB and NodePort

What is difference between MetalLB and NodePort?
A node port is a built-in feature that allows users to access a service from the IP of any k8s node using a static port. The main drawback of using node ports is that your port must be in the range 30000-32767 and that there can, of course, be no overlapping node ports among services. Using node ports also forces you to expose your k8s nodes to users who need to access your services, which could pose security risks.
MetalLB is a third-party load balancer implementation for bare metal servers. A load balancer exposes a service on an IP external to your k8s cluster at any port of your choosing and routes those requests to your k8s nodes.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
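As a sketch of such a pool (the address range is a placeholder for addresses on your own network; recent MetalLB releases have since replaced this ConfigMap with CRDs):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system      # same namespace as the MetalLB controller
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range dedicated to MetalLB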
A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node.
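For comparison, here is a minimal NodePort Service sketch (name, labels and ports are placeholders); if nodePort is omitted, Kubernetes picks a free port from the 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app         # pods carrying this label back the service
  ports:
  - port: 80            # ClusterIP port inside the cluster
    targetPort: 8080    # container port
    nodePort: 30080     # must fall within 30000-32767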

How do networking and load balancer work in docker swarm mode?

I am new to Docker and containers. I was going through the tutorials for Docker and came across this information.
https://docs.docker.com/get-started/part3/#docker-composeyml
    networks:
      - webnet        # attaches the web service to the webnet network
networks:
  webnet:             # top-level network definition; defaults to an overlay network in swarm mode
What is webnet? The document says
Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves will publish to web’s port 80 at an ephemeral port.)
So, by default, the overlay network is load balanced in docker cluster? What is load balancing algo used?
Actually, it is not clear to me why do we have load balancing on the overlay network.
Not sure I can be clearer than the docs, but maybe rephrasing will help.
First, the doc you're following here uses what is called the swarm mode of docker.
What is swarm mode?
A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.
From SO Documentation:
A swarm is a number of Docker Engines (or nodes) that deploy services collectively. Swarm is used to distribute processing across many physical, virtual or cloud machines.
So, with swarm mode you have a multi-host (VMs and/or physical machines) cluster whose machines communicate with each other through their Docker engines.
Q1. What is webnet?
webnet is the name of an overlay network that is created when your stack is launched.
Overlay networks manage communications among the Docker daemons participating in the swarm
In your cluster of machines, a virtual network is then created, where each service gets an IP mapped to an internal DNS entry (the service name), allowing Docker to route incoming packets to the right container anywhere in the swarm (cluster).
Q2. So, by default, overlay network is load balanced in docker cluster ?
Yes, if you use the overlay network, but you could also remove the service networks configuration to bypass that. Then you would have to publish the port of the service you want to expose.
Q3. What is load balancing algo used ?
From this SO question answered by swarm master bmitch ;):
The algorithm is currently round-robin and I've seen no indication that it's pluginable yet. A higher level load balancer would allow swarm nodes to be taken down for maintenance, but any sticky sessions or other routing features will be undone by the round-robin algorithm in swarm mode.
Q4. Actually it is not clear to me why do we have load balancing on overlay network
The purpose of docker swarm mode / services is to allow orchestration of replicated services, meaning that we can scale the containers deployed in the swarm up or down.
From the docs again:
Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
So you can deploy, say, 10 identical containers (e.g., nginx serving your app's HTML/JS) without dealing with private network DNS entries, port configuration, etc. Any incoming request will be automatically load balanced across the hosts participating in the swarm.
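As a rough sketch (the image name and replica count are illustrative), a stack file like the one below, deployed with docker stack deploy -c docker-compose.yml myapp, spreads ten identical tasks across the swarm and round-robins requests arriving on port 80 among them:

version: "3"
services:
  web:
    image: nginx:alpine      # stand-in for your application image
    deploy:
      replicas: 10           # ten identical tasks behind one virtual IP
    ports:
      - "80:80"              # published through the swarm routing mesh on every node
    networks:
      - webnet
networks:
  webnet:                    # created as an overlay network when the stack is deployed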
Hope this helps!

Kubernetes: Using UDP broadcast to find other pods

I have a clustered legacy application that I am trying to deploy on kubernetes. The nodes in the cluster find each other using UDP broadcast. I cannot change this behaviour for various reasons.
When deployed on Docker, this would be done by creating a shared network (i.e. docker network create --internal mynet, leading to a subnet e.g. 172.18.0.0/16), and connecting the containers containing the clustered nodes to the same network (docker network connect mynet instance1 and docker network connect mynet instance2). Then every instance, on startup, would broadcast its IP address periodically on this network using 172.18.255.255 until they have formed a cluster. Multiple such clusters could reside in the same Kubernetes namespace, so preferably I would like to create my own "private network" just for these pods to avoid port collisions.
Is there a way of creating such a network on kubernetes, or otherwise trick the application into believing it is connected to such a network (assuming the IP addresses of the other nodes are known)? The kubernetes cluster I am running on uses Calico.
Maybe you can set a label on your app pods and try a NetworkPolicy with Calico.
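As a sketch of that suggestion (the label key/value, namespace and policy name are assumptions), a NetworkPolicy selecting pods by label could look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: legacy-cluster-a        # one policy per logical cluster
  namespace: default            # placeholder namespace
spec:
  podSelector:
    matchLabels:
      cluster: legacy-a          # label the pods of this cluster accordingly
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          cluster: legacy-a      # only peers of the same logical cluster may connect

Note that this only restricts which pods may exchange traffic; it does not create a separate subnet or broadcast domain, so the UDP broadcast discovery itself may still behave differently than on a dedicated Docker network.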

How to expose a service in Kubernetes running on bare metal

Kubernetes version: 1.10, running on bare metal
No. of masters: 3
We are running multiple microservices inside a Kubernetes cluster. Currently, we are exposing these services outside of the cluster using NodePort. Each microservice has its own NodePort, so we have to maintain a list of the corresponding microservices. Since we are running on bare metal we don't have features like LoadBalancer for exposing a microservice.
Problem: Since we have multiple masters and workers inside the cluster, we have to use a static IP or DNS for one master at a time. If I want to access any service from outside the cluster I have to use IP_ADDRESS:NODEPORT or DNS:NODEPORT. At any given time I can use the address of only one master. If that master goes down then I have to change the microservices' address to another master's address. I don't want to use a static IP or DNS of any master.
What could be a better way to expose these microservices without NodePort? Is there any feature like LoadBalancer on bare metal? Can Ingress or Nginx help us?
There is a LoadBalancer for bare metal: it's called MetalLB. The project is available on GitHub; unfortunately, this solution is in an alpha state and is more complex.
You can also follow the instructions from NGINX and set up a round-robin method for TCP or UDP.
Ingress only supports HTTP(S), over ports 80 and 443.
You can of course setup your own ingress controller but it will be a lot of extra work.
The downside of NodePort is the limited range of usable ports (30000 to 32767), and if the IP of the machine changes your services will become inaccessible.
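As a sketch of how a single Ingress could front several microservices (the host, paths and Service names are placeholders; on Kubernetes 1.10 the Ingress API lives in the extensions/v1beta1 group, while newer clusters use networking.k8s.io/v1):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - host: apps.example.com         # placeholder DNS name pointing at the ingress controller node(s)
    http:
      paths:
      - path: /service-a
        backend:
          serviceName: service-a   # existing ClusterIP Service (placeholder name)
          servicePort: 80
      - path: /service-b
        backend:
          serviceName: service-b
          servicePort: 80

The ingress controller itself still has to be reachable from outside, e.g. via a MetalLB-managed LoadBalancer or a fixed NodePort, but you then maintain one entry point instead of one NodePort per microservice.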
