Communication Between Pods in Different Clusters in K8s using YAML - networking

Can someone provide references or a basic idea of how communication is done between pods in different clusters?
Suppose Cluster A has Pod A and Cluster B has Pod B. How can we ensure Pod A can communicate with Pod B using YAML? Thanks in advance.

Posting this answer as a community wiki for better visibility and to add some additional resources, as the solution was posted in the comments by user David Maze:
If the pods are in different clusters, they can't directly communicate with each other (without using NodePort or LoadBalancer services, or otherwise making the destination service accessible from outside its own cluster).
With the most common setups, the way for Pod1 in Cluster1 to communicate with Pod2 in Cluster2 would be to use:
Service of type NodePort
Service of type LoadBalancer
Ingress resource - specific to HTTP/HTTPS traffic
All of the above solutions heavily depend on where your Kubernetes cluster is deployed.
For example:
With cloud solutions like GKE, AKS or EKS you can use a Service of type LoadBalancer or an Ingress resource to direct the traffic to your pod.
With a bare-metal solution you would need an additional tool like MetalLB to use a Service of type LoadBalancer.
You could also look at these resources:
Istio.io: Install: Multicluster: Gateways
Istio.io: Blog: Multi-cluster mesh automation
As an example, assume that you have 2 Kubernetes clusters that can expose traffic with a Service of type LoadBalancer.
Apply on the first cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
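Assuming the manifest above is saved as nginx.yaml (the filename is arbitrary), apply it with:
$ kubectl apply -f nginx.yaml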
Check the EXTERNAL-IP associated with the service:
$ kubectl get service nginx-service
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.92.10.48   A.B.C.D       80:30994/TCP   26s
Switch to the second cluster and run:
$ kubectl run -it ubuntu --image=ubuntu -- /bin/bash
$ apt update && apt install curl
$ curl A.B.C.D
You should be able to see:
<--- REDACTED --->
<p><em>Thank you for using nginx.</em></p>
<--- REDACTED --->
Additional resources:
Kubernetes.io: Concepts: Services
Medium.com: Kubernetes NodePort vs LoadBalancer vs Ingress, when should I use what - could be somewhat specific to GKE

Related

Unable to access the nginx through Kubernetes LoadBalancer service

I'm using the k8s cluster provided with Docker Desktop (Windows).
My deployment.yml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
and my service.yml file is:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: nginx-app
spec:
  selector:
    app: nginx-app
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 80
All are up and running, but I'm unable to access the application:
>curl localhost:31000
curl: (7) Failed to connect to localhost port 31000: Connection refused
>kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-685658ccbf-g84w5   1/1     Running   0          8s

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        14h
service/my-service   LoadBalancer   10.96.210.40   localhost     80:31000/TCP   4s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   1/1     1            1           8s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-685658ccbf   1         1         1       8s
Note: I created Inbound/Outbound rules for port 31000 in the Windows firewall to make sure it won't block the traffic.
There are some questions you can try to answer in order to Debug Services:
Does the Service exist?: In your case we see that it does.
Does the Service work by DNS name?: One of the most common ways that clients consume a Service is through a DNS name.
Does the Service work by IP?: Assuming you have confirmed that DNS works, the next thing to test is whether your Service works by its IP address.
Is the Service defined correctly?: You should really double and triple check that your Service is correct and matches your Pod's port. Also:
Is the Service port you are trying to access listed in spec.ports[]?
Is the targetPort correct for your Pods (some Pods use a different port than the Service)?
If you meant to use a numeric port, is it a number (9376) or a string "9376"?
If you meant to use a named port, do your Pods expose a port with the same name?
Is the port's protocol correct for your Pods?
Does the Service have any Endpoints?: Check that the Pods you ran are actually being selected by the Service.
Are the Pods working?: Check again that the Pods are actually working.
Is the kube-proxy working?: Confirm that kube-proxy is running on your Nodes.
Going through the above steps will help you find the cause of this and possible future issues with Services. A few of the checks can be run directly, as sketched below.
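As a rough sketch (assuming a Service named my-service selecting pods labeled app=nginx-app, as in the manifests above), the checks map to commands like these:
# Does the Service exist?
$ kubectl get service my-service
# Does it resolve by DNS name from inside a pod?
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup my-service
# Does the Service have any Endpoints, i.e. does its selector match running Pods?
$ kubectl get endpoints my-service
# Are the Pods themselves working? Bypass the Service and hit a Pod directly.
$ kubectl get pods -l app=nginx-app -o wide
$ kubectl port-forward deployment/nginx-deployment 8080:80   # then: curl localhost:8080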

Kubernetes, access IP outside the cluster

I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM in the same network, but outside the cluster, from within a pod in the cluster?
For example, I have a VM at 10.22.0.1:30000 which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
  - name: vm
    protocol: TCP
    port: 30000
    targetPort: 30000
  externalIPs:
  - 10.22.0.1
But when I do "curl http://vm-ip:30000" from a Pod (kubectl exec -it), it returns a "connection refused" error, yet it works with "google.com". What are the ways of accessing external IPs?
You can create an Endpoints object for that.
Let's go through an example:
In this example, I have a http server on my network with IP 10.128.15.209 and I want it to be accessible from my pods inside my Kubernetes Cluster.
First thing is to create an endpoint. This is going to let me create a service pointing to this endpoint that will redirect the traffic to my external http server.
My endpoint manifest looks like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
  - addresses:
      - ip: 10.128.15.209
    ports:
      - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service:
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  ports:
  - port: 80
    targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Checking that our service exists and saving its ClusterIP for later usage:
user@minikube-server:~$ kubectl get service
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
http-server   ClusterIP   10.96.228.220   <none>        80/TCP    30m
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP   10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash
This command will create and open a bash session inside a ubuntu pod.
In my case I'll install curl to be able to check if I can access my http server; you may need to install a different client (e.g. mysql) depending on your service:
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity with my service using its ClusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case you have to create this:
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
  - addresses:
      - ip: 10.22.0.1
    ports:
      - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
  - port: 30000
    targetPort: 30000
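As a quick check (a sketch, assuming both manifests above are saved together as vm-server.yaml), apply them and fetch the Service by its DNS name from a throwaway pod:
$ kubectl apply -f vm-server.yaml
$ kubectl run -it --rm test --image=busybox --restart=Never -- wget -qO- http://vm-server:30000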

Gitlab kubernetes integration

I have a custom Kubernetes cluster on a server with a public IP and DNS pointing to it (also wildcard).
Gitlab was configured with the cluster following this guide: https://gitlab.touch4it.com/help/user/project/clusters/index#add-existing-kubernetes-cluster
However, after installing Ingress, the ingress endpoint is never detected.
I tried patching the Service in k8s, like so:
externalIPs: # was empty
- 1.2.3.4
externalTrafficPolicy: Local # was Cluster
I suspect that the problem is the empty loadBalancer status of the ingress Service (scroll to the end of the output), obtained by calling:
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-20T08:57:18Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "3940"
  selfLink: /api/v1/namespaces/gitlab-managed-apps/services/ingress-nginx-ingress-controller
  uid: c175afcc-0b73-11ea-91ec-5254008dd01b
spec:
  clusterIP: 10.107.35.248
  externalIPs:
  - 1.2.3.4 # (public IP)
  externalTrafficPolicy: Local
  healthCheckNodePort: 30737
  ports:
  - name: http
    nodePort: 31972
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31746
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
But GitLab still can't find the ingress endpoint. I tried restarting the cluster and GitLab.
The network inspection in GitLab always shows this response:
...
name                ingress
status              installed
status_reason       null
version             1.22.1
external_ip         null
external_hostname   null
update_available    false
can_uninstall       false
...
Any ideas how to have a working Ingress Endpoint?
GitLab: 12.4.3 (4d477238500) k8s: 1.16.3-00
I had the exact same issue as you, and I finally figured out how to solve it.
The first thing to understand is that on bare metal you can't make this work without MetalLB, because it calls the required Kubernetes APIs, making the cluster accept the IP address you give to the Service of type LoadBalancer.
So the first step is to deploy MetalLB to your cluster.
Then you need another machine running a service like NGiNX or HAProxy, or whatever can do some load balancing.
Last but not least, you have to give the load balancer machine's IP address to MetalLB so that it can assign it to the Service.
Usually MetalLB requires a range of IP addresses, but you can also give a single IP address like I did:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
This way, MetalLB will assign the IP address to the Service of type LoadBalancer, and GitLab will finally find it.
WARNING: MetalLB assigns each IP address only once. If you need many Services of type LoadBalancer, you will need as many machines running NGiNX/HAProxy, and you'll have to add their IP addresses to the MetalLB address pool.
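For illustration, once the pool above exists, any Service of type LoadBalancer can claim the address. A minimal sketch (the name and selector here are hypothetical, not from the GitLab setup):
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app # hypothetical label; must match your pods
  ports:
  - port: 80
    targetPort: 80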
For your information, I've posted all the technical details to my Gitlab issue here.

How to set external IP for nginx-ingress controller in private cloud kubernetes cluster

I am setting up a Kubernetes cluster to run Hyperledger Fabric apps. My cluster is on a private cloud, hence I don't have a load balancer. How do I set an IP address for my nginx-ingress-controller (pending) to expose my services? I think it is interfering with my creation of pods, since when I run kubectl get pods, I see very many evicted pods. I am using cert-manager, which I think also needs IPs.
CA_POD=$(kubectl get pods -n cas -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")
This does not create any pods.
nginx-ingress-controller-5bb5cd56fb-lckmm      1/1   Running
nginx-ingress-default-backend-dc47d79c-8kqbp   1/1   Running
The rest take the form:
nginx-ingress-controller-5bb5cd56fb-d48sj   0/1   Evicted
ca-hlf-ca-5c5854bd66-nkcst                  0/1   Pending   0   0s
ca-postgresql-0                             0/1   Pending   0   0s
I would like to create pods from which I can run exec commands like
kubectl exec -n cas $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcertscert.pem
You are not exposing the nginx-controller IP address; instead, expose nginx's service via a NodePort. For example:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-controller
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    app: nginx
In this case, you'd be able to reach your service like this:
curl -v <NODE_EXTERNAL_IP>:30080
As for the question of why your pods are in Pending state, please describe the misbehaving pods:
kubectl describe pod nginx-ingress-controller-5bb5cd56fb-d48sj
The best approach is to use Helm:
helm install stable/nginx-ingress
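On a private cloud without a load balancer, you may want the chart to expose the controller via NodePort instead. A sketch, assuming Helm 2 syntax and the stable/nginx-ingress chart's controller.service.type value (check the values for your chart version):
$ helm install stable/nginx-ingress --name nginx-ingress --set controller.service.type=NodePort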

Assign External IP to a Kubernetes Service

EDIT: The whole point of my setup is to achieve (if possible) the following:
I have multiple k8s nodes
When I contact an IP address (from my company's network), it should be routed to one of my container/pod/service/whatever.
I should be able to easily set up that IP (like in my service.yml definition)
I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate whether I can move my Docker (old) Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current Docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers, so I can reach them directly (without a reverse proxy) as if they were physical servers. I'm trying to achieve something similar with k8s.
I found out that:
I have to use a Service
I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS)
I should use ExternalIPs
Ingress resources are some kind of reverse proxy?
My yaml file is:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information on my network traffic in order to see if packets are coming in?
EDIT:
$ kubectl get svc
NAME            CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      10.96.0.1      <none>        443/TCP   6d
nginx-service   10.102.64.83   A.B.C.D       80/TCP    23h
Thanks.
First of all, run this command:
kubectl get -n namespace services
The above command will return output like this:
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
backend    NodePort   10.100.44.154   <none>        9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    <none>        3000:30017/TCP   13h
It is clear from the above output that external IPs are not assigned to the services yet. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now get the services again to check whether the external IPs were assigned:
kubectl get -n namespace services
We get an output like this:
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
backend    NodePort   10.100.44.154   192.168.0.194   9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    192.168.0.194   3000:30017/TCP   13h
Cheers!!! Kubernetes external IPs are now assigned.
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
A solution that could work (and not only for testing, though it has its shortcomings) is to set your Pod to map the host network with the hostNetwork spec field set to true.
It means that you won't need a service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your pod to access the K8S DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector: # required with apps/v1
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations: # allow a Pod instance to run on Master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
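Once the DaemonSet is running, each instance answers directly on its node's address; a quick check (replace <NODE_IP> with one of your nodes' IPs):
$ curl -v http://<NODE_IP>:80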
EDIT: I was put on this track thanks to the ingress-nginx documentation.
You can just patch an external IP:
CMD: $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
E.g.: $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
You can try the kube-keepalived-vip configuration to route the traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
You can try adding "type: NodePort" in your yaml file for the service; then you'll have a port to access it via the web browser or from outside the cluster. In my case, it helped.
I don't know if that helps in your particular case, but what I did (and I'm on a bare-metal cluster) was to use the LoadBalancer type and set the loadBalancerIP as well as the externalIPs to my server IP, as you did.
After that the correct external IP showed up for the load balancer.
Always use the namespace flag either before or after the service name, because namespace-based scoping applies to deployments and services, and it points the patch at the service in a specific namespace:
kubectl patch svc service-name -n namespace -p '{"spec":{"externalIPs":["IP"]}}'
Just include the additional option:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1
