Scenario:
I have four (4) Pods: payroll, internal, external, mysql.
I want internal pod to only access:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
Kindly suggest what the missing part is. I made the network policy below, but my pod is unable to communicate with 'any' pod.
Hence it has achieved the given target, but practically unable to access other pods. Below are my network policy details.
master $ k describe netpol/internal-policy
Name: internal-policy
Namespace: default
Created on: 2020-02-20 02:15:06 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: name=internal
Allowing ingress traffic:
<none> (Selected pods are isolated for ingress connectivity)
Allowing egress traffic:
To Port: 8080/TCP
To:
PodSelector: name=payroll
----------
To Port: 3306/TCP
To:
PodSelector: name=mysql
Policy Types: Egress
Policy YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: internal-policy
namespace: default
spec:
podSelector:
matchLabels:
name: internal
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
name: payroll
ports:
- protocol: TCP
port: 8080
- to:
- podSelector:
matchLabels:
name: mysql
ports:
- protocol: TCP
port: 3306
Hence it has achieved the given target, but practically unable to
access other pods.
If I understand you correctly, you achieved the goal defined in your network policy, and all your Pods with the label name: internal are currently able to communicate both with the payroll (on port 8080) and mysql (on port 3306) Pods, right?
Please correct me if I'm wrong, but I see some contradiction in your statement. On the one hand you want your internal Pods to be able to communicate only with a very specific set of Pods, connecting to them on specified ports:
I want internal pod to only access:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
and on the other, you seem surprised that they can't access any other Pods:
Hence it has achieved the given target, but practically unable to
access other pods.
Keep in mind that when you apply a NetworkPolicy rule to a specific set of Pods, a default deny-all rule is implicitly applied to the selected Pods at the same time (unless you decide to reconfigure the default policy to make it work the way you want).
As you can read here:
Pods become isolated by having a NetworkPolicy that selects them. Once
there is any NetworkPolicy in a namespace selecting a particular pod,
that pod will reject any connections that are not allowed by any
NetworkPolicy. (Other pods in the namespace that are not selected by
any NetworkPolicy will continue to accept all traffic.)
The above applies also to egress rules.
If your internal Pods currently have access only to the payroll and mysql Pods on the specified ports, everything works as it is supposed to.
If you are interested in denying all other traffic to your payroll and mysql Pods, you should apply an ingress rule on those Pods rather than defining egress on the Pods which are supposed to communicate with them but should not be deprived of the ability to communicate with other Pods. A sketch follows below.
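A minimal sketch of such an ingress rule for the payroll Pods (assuming the same name=payroll and name=internal labels used in your policy; the mysql Pods would get an analogous policy for port 3306):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payroll-ingress-policy   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: payroll              # the Pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: internal         # only internal Pods may connect
    ports:
    - protocol: TCP
      port: 8080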
Please let me know if this helped. If something is not clear or my assumption was wrong, please let me know as well, and don't hesitate to ask additional questions.
Related
I'm using GKE version 1.21.12-gke.1700 and I'm trying to set externalTrafficPolicy to "Local" on my nginx external load balancer (not ingress). After the change, nothing happens, and I still see the source as an internal IP from the kubernetes IP range instead of the client's IP.
This is my service's YAML:
apiVersion: v1
kind: Service
metadata:
name: nginx-ext
namespace: my-namespace
spec:
externalTrafficPolicy: Local
healthCheckNodePort: xxxxx
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
loadBalancerSourceRanges:
- x.x.x.x/32
ports:
- name: dashboard
port: 443
protocol: TCP
targetPort: 443
selector:
app: nginx
sessionAffinity: None
type: LoadBalancer
And the nginx logs:
*2 access forbidden by rule, client: 10.X.X.X
My goal is to make an endpoint-based restriction (deny all and allow only specific clients).
You can use curl to query the IP of the load balancer, for example curl 202.0.113.120. Please note that setting service.spec.externalTrafficPolicy to Local in GKE removes nodes without service endpoints from the list of nodes eligible for load-balanced traffic; so if you apply the Local value to your external traffic policy, a node must have at least one service endpoint to receive traffic. Because of this, the service.spec.healthCheckNodePort matters: this port needs to be allowed in the ingress firewall rule. You can get the health check node port from your YAML with this command:
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
You can follow this guide if you need more information about how the LoadBalancer service type works in GKE. Finally, you can limit traffic from outside at your external load balancer by deploying loadBalancerSourceRanges. In the following link you can find more information on how to protect your applications from outside traffic.
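As a hedged sketch (the rule name and port value are placeholders, and the source ranges are the health-check ranges commonly documented by Google, so verify them against the current GCP docs), the firewall rule for the health check node port could be created like this:
# Allow Google health-check probes to reach the healthCheckNodePort on the nodes
# 32000 stands in for the value printed by the kubectl command above
gcloud compute firewall-rules create allow-lb-health-check \
    --network default \
    --allow tcp:32000 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16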
I'm facing a weird issue on setting up egress network policy on my kube cluster.
Basically I want my pod A to access only pod B.
I have two pods:
hello-k8s-deploy
nginx
The hello-k8s-deploy pod exposes an API on port 8080 via a NodePort.
My nginx pod simply runs an image used to access that API.
So let's log in to the nginx pod and access the API exposed by the hello-k8s-deploy pod.
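(The original screenshot is not reproduced here; a rough sketch of the kind of call made, assuming the Service in front of the deployment is named hello-k8s-svc as referenced further down, listens on port 8080, and the nginx image has curl available:)
kubectl -n app exec -it <nginx-pod> -- curl http://hello-k8s-svc:8080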
The API responds with a message starting with Hello K8s!
Now let's apply the network policy on my nginx pod so it can access only this API, nothing else.
Network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: app
spec:
podSelector:
matchLabels:
run: nginx
egress:
- to:
- podSelector:
matchLabels:
app: hello-k8s-deploy
policyTypes:
- Egress
The above policy will be applied to the pod with label run: nginx, and the rule allows traffic to pods with label app: hello-k8s-deploy.
Let's validate it by looking at the definitions of both pods, nginx and hello-k8s-deploy:
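(The original screenshots are omitted; one way to check the labels, assuming everything lives in the app namespace:)
kubectl -n app get pods --show-labels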
As we can see, both labels match the NetworkPolicy.
After I applied the network policy and accessed nginx again, I expected it to work the same and get a response from the API, but I'm getting the below error.
Take note that:
All of the resources are in the same namespace, app.
My network addon is weave-net, which supports network policy as per the documentation.
I even tried to specify the namespace selector and add port 8080.
I finally resolved the issue. The problem I was getting was could not resolve host hello-k8s-svc, which means k8s is trying to connect using this host and resolve it through the DNS name (service name).
And since my pod only allows egress to hello-k8s-deploy, it fails, as it also needs to connect to kube-dns to resolve that name. So before you apply an egress policy, make sure the pod (or all pods in your namespace) is allowed to connect to kube-dns for DNS resolution.
The fix is simply creating an egress rule that allows all pods to connect to kube-dns, on top of your pod-specific egress configuration:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all-egress
spec:
podSelector: {}
egress:
- to:
- namespaceSelector:
matchLabels:
networking/namespace: kube-system
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
policyTypes:
- Egress
In my case I labeled the kube-system namespace:
kubectl label namespace kube-system networking/namespace=kube-system
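To verify that DNS resolution works again once both policies are in place, a quick hedged check (the throwaway busybox pod and the app namespace are assumptions based on the setup above):
kubectl -n app run dnstest --rm -it --restart=Never --image=busybox -- nslookup hello-k8s-svc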
Currently I am having an issue with one of my services set to be a load balancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?
Load Balancer Service:
apiVersion: v1
kind: Service
metadata:
labels:
app: loadbalancer
role: loadbalancer-service
name: lb-test
namespace: default
spec:
clusterIP: 10.3.249.57
externalTrafficPolicy: Local
ports:
- name: example-service
nodePort: 30581
port: 8000
protocol: TCP
targetPort: 8000
selector:
app: loadbalancer-example
role: example
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: *example.ip*
Could be several things. A couple of suggestions:
Your service is getting an external IP and doesn't know how to reply back based on the local IP address of the pod.
Try running a sniffer on your pod to see if you are getting packets from the external source.
Try checking the logs of your application.
The health check in your load balancer is failing. Check the load balancer for your service in the GCP console.
Check that the instance port is listening (probably not, if your health check is failing); a quick way to check the health-check endpoint is sketched below.
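With externalTrafficPolicy: Local, kube-proxy serves an HTTP health endpoint on the Service's healthCheckNodePort; a hedged way to inspect it (the node IP is a placeholder, the Service name is taken from your manifest):
# Read the health check node port assigned to the Service
kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'
# Expect HTTP 200 from a node that hosts a ready endpoint, 503 otherwise
curl http://<node-ip>:<healthCheckNodePort>/healthz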
Hope it helps.
I have a Kubernetes cluster with a master node and two other nodes:
sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 4h v1.10.2
kubernetes-node1 Ready <none> 4h v1.10.2
kubernetes-node2 Ready <none> 34m v1.10.2
Each of them is running on a VirtualBox Ubuntu VM, accessible from the guest computer:
kubernetes-master (192.168.56.3)
kubernetes-node1 (192.168.56.4)
kubernetes-node2 (192.168.56.6)
I deployed an nginx server with two replicas, having one pod per kubernetes-node-x:
sudo kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deployment-64ff85b579-5k5zh 1/1 Running 0 8s 192.168.129.71 kubernetes-node1
nginx-deployment-64ff85b579-b9zcz 1/1 Running 0 8s 192.168.22.66 kubernetes-node2
Next I expose a service for the nginx-deployment as a NodePort to access it from outside the cluster:
sudo kubectl expose deployment/nginx-deployment --type=NodePort
sudo kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h
nginx-deployment NodePort 10.96.194.15 <none> 80:32446/TCP 2m
sudo kubectl describe service nginx-deployment
Name: nginx-deployment
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.96.194.15
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32446/TCP
Endpoints: 192.168.129.72:80,192.168.22.67:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I can access each pod on a node directly using the node's IP:
kubernetes-node1 http://192.168.56.4:32446/
kubernetes-node2 http://192.168.56.6:32446/
But, I thought that K8s provided some kind of external cluster ip that balanced the requests to the nodes from the outside. What is that IP??
But, I thought that K8s provided some kind of external cluster ip that balanced the requests to the nodes from the outside. What is that IP??
The cluster IP is internal to the cluster. It is not exposed to the outside; it is for intercommunication across the cluster.
Indeed, there is the LoadBalancer type of service that can do the trick you need, only it depends on cloud providers (or minikube/docker edge) to work properly.
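A minimal sketch of such a Service for your nginx deployment (assuming the app=nginx selector from your describe output; on a supported cloud the external IP is assigned automatically):
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment-lb   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80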
I can access each pod in a node directly using their node IP
Actually, you don't access them individually that way. NodePort does a slightly different trick, since it essentially load-balances requests from outside on ANY exposed node IP. In a nutshell, if you hit any of the nodes' IPs on the exposed NodePort, kube-proxy makes sure the required service gets the request, and the service then round-robins through the active pods, so although you hit a specific node IP, you don't necessarily get a pod running on that specific node. You can find more details here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0. As the author there says, it is not technically the most accurate representation, but it attempts to show on a logical level what happens with NodePort exposure.
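With the NodePort from your output, either node IP serves the Service, regardless of where the answering pod runs, e.g.:
curl http://192.168.56.4:32446/   # may be answered by the pod on kubernetes-node1 or kubernetes-node2
curl http://192.168.56.6:32446/   # same; kube-proxy forwards to any ready endpoint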
As a sidenote, in order to do this on bare metal and handle SSL and such, you need to provision an ingress of your own. Say, place one nginx on a specific node and then reference all the appropriate services you want exposed (mind the FQDN for the service) as upstream(s); those can run on multiple nodes with as many nginx instances of their own as desired - you don't need to handle the exact details of that since k8s runs the show. That way you have one node point (the ingress nginx) with a known IP address that handles incoming traffic and redirects it to services inside k8s, which can run across any node(s). I suck at ASCII art but will give it a try:
(outside) -> ingress (nginx) +--> my-service FQDN (running across nodes):
[node-0] | [node-1]: my-service-pod-01 with nginx-01
| [node 2]: my-service-pod-02 with nginx-02
| ...
+--> my-second-service FQDN
| [node-1]: my-second-service-pod with apache?
...
In the above sketch you have the nginx ingress on node-0 (known IP) taking external traffic and then handling my-service (running as two pods on two nodes) and my-second-service (a single pod) as upstreams. You only need to expose the FQDN of the services for this to work, without worrying about the IPs of specific nodes. You can find more info in the documentation: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
Also, way better than my ASCII art is this representation from the same link as in the previous point, which illustrates the idea behind ingress:
Updated for comments
Why isn't the service load balancing the used pods from the service?
This can happen for several reasons. Depending on how your liveness and readiness probes are configured, maybe the service still doesn't see the pod as out of service. Due to the async nature of a distributed system such as k8s, we experience temporary loss of requests when pods get removed during, for example, rolling updates and similar. Secondly, depending on how your kube-proxy is configured, there are options to limit it: per the official documentation (https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport), using --nodeport-addresses you can change kube-proxy behavior. It turns out that round-robin was the old kube-proxy behavior; apparently the new one should be random. Finally, to exclude connection and session issues coming from the browser, did you try this from an anonymous session as well? Do you maybe have DNS cached locally?
Something else, I killed the pod from node1, and when calling to node1 it didn't use the pod from node 2.
This is a bit strange. It might be related to the above-mentioned probes, though. According to the official documentation this should not be the case; we had NodePort behaving in line with the official documentation mentioned above: and each Node will proxy that port (the same port number on every Node) into your Service. But if that is your case, then probably an LB or Ingress, maybe even ClusterIP with an external address (see below), can do the trick for you.
if the service is internal (ClusterIP) ... does it load balance to any of the pods in the nodes
Most definitely yes. One more thing: you can use this to also expose 'load balanced' behavior in the 'standard' port range, as opposed to the 30000+ range from NodePort. Here is an excerpt of the service manifest we use for the ingress controller:
apiVersion: v1
kind: Service
metadata:
namespace: ns-my-namespace
name: svc-nginx-ingress-example
labels:
name: nginx-ingress-example
role: frontend-example
application: nginx-example
spec:
selector:
name: nginx-ingress-example
role: frontend-example
application: nginx-example
ports:
- protocol: TCP
name: http-port
port: 80
targetPort: 80
- protocol: TCP
name: ssl-port
port: 443
targetPort: 443
externalIPs:
- 123.123.123.123
Note that in the above example the imaginary 123.123.123.123 exposed via externalIPs represents the IP address of one of our worker nodes. Pods backing the svc-nginx-ingress-example service don't need to be on this node at all, but they still get the traffic routed to them (and load-balanced across the pods as well) when that IP is hit on the specified port.
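So, with the manifest above applied, hitting that node IP on either exposed port reaches the ingress pods wherever they run, for example:
curl http://123.123.123.123/
curl -k https://123.123.123.123/   # -k only if the certificate is not trusted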
I have a Kubernetes cluster running 1.2.3 binaries along with flannel 0.5.5. I am using the GCE backend with IP forwarding enabled. For some reason, although I specify a specific Node's external IP address, it will not forward to the appropriate node.
Additionally, I cannot create external load balancers; the controller-manager says it can't find the GCE instances that are the nodes, which are in Ready state. I've looked at the source where the load balancer creation happens; my guess is that it's either a permissions issue (I gave the cluster full permissions for GCE) or it's not finding the metadata.
Here is an example of the services in question:
kind: "Service"
apiVersion: "v1"
metadata:
name: "client"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
name: "insecure"
- protocol: "TCP"
port: 443
targetPort: 443
name: "secure"
selector:
name: "client"
sessionAffinity: "ClientIP"
externalIPs:
- "<Node External IP>"
And when I was trying to create the load balancer, the service had type: LoadBalancer.
Why would the forwarding to the node's IP not work? I have an idea about the load balancer issue, but does anyone have any insight?
So I eventually worked around this issue by creating an external load balancer. Only then did I have a valid external IP.
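For reference, a hedged sketch of that workaround: the client Service from the question with type: LoadBalancer instead of externalIPs (the cloud provider then allocates the external IP):
kind: Service
apiVersion: v1
metadata:
  name: client
spec:
  type: LoadBalancer
  selector:
    name: client
  sessionAffinity: ClientIP
  ports:
  - name: insecure
    protocol: TCP
    port: 80
    targetPort: 80
  - name: secure
    protocol: TCP
    port: 443
    targetPort: 443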