Can someone provide references or a basic idea of how communication is done between pods in different clusters?
Suppose Cluster A has Pod A and Cluster B has Pod B. How can we ensure Pod A can communicate with Pod B using YAML? Thanks in advance.
Posting this answer as a community wiki for better visibility and to add some additional resources, as the solution was posted in the comments by user David Maze:
If the pods are in different clusters, they can't directly communicate with each other (without using NodePort or LoadBalancer services, or otherwise making the destination service accessible from outside its own cluster).
With the most common setups, the way for Pod1 in Cluster1 to communicate with Pod2 in Cluster2 would be to use one of the following (a minimal NodePort sketch follows this list):
Service of type NodePort
Service of type LoadBalancer
Ingress resource - specific to HTTP/HTTPS traffic
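For instance, here is a minimal sketch of a Service of type NodePort, assuming the same app: nginx labels used in the LoadBalancer example further down (the name and the nodePort value are only illustrative); with NodePort you reach the pod via <any-node-ip>:<node-port> instead of an external IP:

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080           # optional; must fall in the node port range (default 30000-32767)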
All of the above solutions will heavily depend on where your Kubernetes cluster is deployed.
For example:
With cloud solutions like GKE, AKS, or EKS, you can use a Service of type LoadBalancer or an Ingress resource to direct the traffic to your pod.
With a bare-metal setup, you would need additional tools like MetalLB to use a Service of type LoadBalancer.
You could also look at these resources:
Istio.io: Install: Multicluster: Gateways
Istio.io: Blog: Multi-cluster mesh automation
As an example, assume that you have 2 Kubernetes clusters that can expose traffic with a Service of type LoadBalancer.
Apply this on the first cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Check the EXTERNAL-IP associated with the service:
$ kubectl get service nginx-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service LoadBalancer 10.92.10.48 A.B.C.D 80:30994/TCP 26s
Switch to the second cluster and run:
$ kubectl run -it ubuntu --image=ubuntu -- /bin/bash
$ apt update && apt install curl
$ curl A.B.C.D
You should be able to see:
<--- REDACTED --->
<p><em>Thank you for using nginx.</em></p>
<--- REDACTED --->
Additional resources:
Kubernetes.io: Concepts: Services
Medium.com: Kubernetes NodePort vs LoadBalancer vs Ingress, when I should use what - could be somewhat specific to GKE
I'm facing a weird issue setting up an egress network policy on my Kubernetes cluster.
Basically I want my pod A to access only pod B.
I have two pods:
hello-k8s-deploy
nginx
The hello-k8s-deploy pod exposes an API on port 8080 via a NodePort service.
My nginx pod is simply an image I use to access the API.
So let's try logging in to the nginx pod and accessing the API exposed by the hello-k8s-deploy pod.
The output above shows that the API responded with a message starting with Hello K8s!
Now let's apply the network policy on my nginx pod so it can access only this API, nothing else.
Network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: app
spec:
  podSelector:
    matchLabels:
      run: nginx
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: hello-k8s-deploy
  policyTypes:
  - Egress
The above policy will be applied to the pod with the label run: nginx, and the rule allows traffic to pods with the label app: hello-k8s-deploy.
Let's validate this by looking at the definitions of both pods, nginx and hello-k8s-deploy:
nginx:
hello-k8s-deploy
As we can see, both labels match the network policy.
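For reference, one quick way to confirm these labels from the command line, assuming everything lives in the app namespace as noted below:

$ kubectl get pods -n app --show-labels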
After I applied the network policy and accessed the API from nginx again, I expected it to work the same and get a response from the API, but I'm getting the error below.
Take note that:
All of the resources are in the same namespace, app.
My network addon is weave-net, which supports network policy as per its documentation.
I even tried to specify the namespace selector and add port 8080.
I finally resolved the issue. The error I was getting was could not resolve host hello-k8s-svc, which means Kubernetes is trying to connect using this host and resolving it through the DNS name (the service name).
Since my pod only allows egress to hello-k8s-deploy, it fails because it also needs to connect to kube-dns to resolve DNS. So before you apply an egress policy, make sure the pod (or all pods in your namespace) is allowed to connect to kube-dns for DNS resolution.
The fix is simply to create an egress rule that allows all pods to connect to kube-dns, on top of your pod-specific egress configuration.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
In my case I labeled the kube-system namespace:
kubectl label namespace kube-system networking/namespace=kube-system
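Once both policies are in place, you can verify that DNS resolution works again from the nginx pod. A hedged check, assuming the pod is literally named nginx and its image ships getent (most Debian-based images, including nginx, do):

$ kubectl exec -n app nginx -- getent hosts hello-k8s-svc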
I am new to Kubernetes. I followed Kubernetes The Hard Way from Kelsey Hightower, and also this, to set up Kubernetes in Azure. Now all the services are up and running fine, but I am not able to expose the traffic using a load balancer. I tried to add a Service object of type LoadBalancer, but the external IP is showing as <pending>. I need to add an ingress to expose the traffic.
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  type: LoadBalancer
  externalIPs:
  - <ip>
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: "443"
    port: 443
    targetPort: 443
  selector:
    app: nginx-service
Thank you,
By default, Kubernetes The Hard Way doesn't include a solution for LoadBalancer services, so the fact that the external IP is pending forever is expected behavior. You need an out-of-the-box solution for that; a very commonly used one is MetalLB.
MetalLB isn't going to allocate an external IP for you; it will allocate an internal IP inside your VPC, and you have to create the necessary routing rules to route traffic to this IP.
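For reference, a minimal sketch of a MetalLB Layer 2 configuration, assuming MetalLB v0.13+ with its CRDs installed; the pool name and the address range below are placeholders you would replace with addresses from your own network:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool               # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.240-192.168.10.250  # placeholder range from your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2                 # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool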
Currently I am having an issue with one of my services that is set to be a load balancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?
Load Balancer Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example-service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: *example.ip*
Could be several things. A couple of suggestions:
Your service is getting an external IP and doesn't know how to reply back based on the local IP address of the pod.
Try running a sniffer on your pod to see if you are getting packets from the external source.
Try checking the logs of your application.
The health check in your load balancer may be failing. Check the load balancer for your service in the GCP console.
Check whether the instance port is listening (probably not, if your health check is failing).
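One thing worth checking in particular: with externalTrafficPolicy: Local, traffic is only forwarded to nodes that actually run a pod backing the service, and the cloud load balancer health-checks nodes on a dedicated healthCheckNodePort. A couple of diagnostic commands, using the lb-test service and selector from the question:

$ kubectl get pods -l app=loadbalancer-example,role=example -o wide   # which nodes have backing pods
$ kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'   # node port used by the LB health check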
Hope it helps.
EDIT: The whole point of my setup is to achieve (if possible) the following:
I have multiple k8s nodes
When I contact an IP address (from my company's network), it should be routed to one of my container/pod/service/whatever.
I should be able to easily set up that IP (like in my Service .yml definition)
I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate whether I can move my Docker (old) Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current Docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers, so I can reach them directly (without a reverse proxy) as if they were physical servers. I'm trying to achieve something similar with k8s.
I found out that:
I have to use a Service
I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS).
I should use ExternalIPs
Ingress resources are some kind of reverse proxy?
My YAML file is:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information on my network traffic to see if packets are coming in?
EDIT:
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 6d
nginx-service 10.102.64.83 A.B.C.D 80/TCP 23h
Thanks.
First of all, run this command:
kubectl get -n namespace services
The above command will return output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 <none> 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 <none> 3000:30017/TCP 13h
It is clear from the above output that external IPs are not assigned to the services yet. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now list the services in the namespace again to check whether the external IPs were assigned:
kubectl get -n namespace services
We get an output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 192.168.0.194 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 192.168.0.194 3000:30017/TCP 13h
Cheers! The Kubernetes external IPs are now assigned.
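Assuming 192.168.0.194 is actually routed to one of your nodes (externalIPs only works if traffic to that address reaches the cluster), you could then test the backend service on its service port from the output above, for example:

$ curl http://192.168.0.194:9400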
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
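Note that binding local port 80 may require elevated privileges; a common workaround is to map a high local port instead, for example:

$ kubectl port-forward service/nginx-service 8080:80
$ curl http://localhost:8080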
A solution that could work (and not only for testing, though it has its shortcomings) is to set your Pod to map the host network with the hostNetwork spec field set to true.
It means that you won't need a service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your pod to access the K8s DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:   # allow a Pod instance to run on the master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
EDIT: I was put on this track thanks to the ingress-nginx documentation.
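Since the pod shares the node's network namespace with hostNetwork: true, you can then reach nginx directly on each node's IP. A quick verification sketch, where <node-ip> is a placeholder for one of the node addresses listed by the first command:

$ kubectl get pods -l app=nginx-reverse-proxy -o wide   # shows which node each pod runs on
$ curl http://<node-ip>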
You can just patch in an external IP:
$ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
For example: $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
You can try a kube-keepalived-vip configuration to route the traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
You can try adding type: NodePort in your YAML file for the service; then you'll have a port to access it via the web browser or from outside the cluster. In my case, it helped.
I don't know if this helps in your particular case, but what I did (and I'm on a bare-metal cluster) was to use type LoadBalancer and set loadBalancerIP as well as externalIPs to my server's IP, as you did.
After that, the correct external IP showed up for the load balancer.
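A sketch of what such a Service might look like; loadBalancerIP and externalIPs are standard Service fields, and A.B.C.D is a placeholder for your server's address:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  loadBalancerIP: A.B.C.D   # placeholder: your server's IP
  externalIPs:
  - A.B.C.D                 # placeholder: your server's IP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80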
Always use the namespace flag (either before or after the service name), because namespace-based scoping applies to deployments and services, and it points the command at the service in that specific namespace:
kubectl patch svc service-name -n namespace -p '{"spec":{"externalIPs":["IP"]}}'
Just include the additional option:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1
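You can then confirm that the external IP was applied (my-service and 1.1.1.1 are just the placeholders from the command above):

$ kubectl get service my-service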