I am trying to set up connections from within my GKE cluster to databases that reside outside of the cluster.
I have read various tutorials including
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
and multiple SO questions, but the problem persists.
Here is an example configuration with which I am trying to set up Kafka connectivity:
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
  - addresses:
      - ip: 10.132.0.5
    ports:
      - port: 9092
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  type: ClusterIP
  ports:
    - port: 9092
      targetPort: 9092
I am able to get some sort of response by connecting directly with nc 10.132.0.5 9092 from the node VM itself, but if I create a pod, say with kubectl run -it --rm --restart=Never alpine --image=alpine sh, then I am unable to connect from within the pod using nc kafka 9092. All libraries in my code fail by timing out, so it seems to be some kind of routing issue.
Kafka is given as an example; I am having the same issues connecting to other databases as well.
Solved it. The issue was with my understanding of how GCP operates.
To solve it I had to add a firewall rule that allows all incoming traffic from the internal GKE network. In my case that was the 10.52.0.0/24 address range.
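For reference, the rule I added looked roughly like this (the rule name is arbitrary, and the network, port, and source range are specific to my setup; the source range is the cluster's internal range, which you can check in the GKE cluster details):
gcloud compute firewall-rules create allow-gke-to-kafka \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:9092 \
  --source-ranges=10.52.0.0/24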
Hope it helps someone.
I'm using GKE version 1.21.12-gke.1700 and I'm trying to set externalTrafficPolicy to Local on my nginx external load balancer (not ingress). After the change nothing happens, and I still see the source as an internal IP from the Kubernetes IP range instead of the client's IP.
This is my service's YAML:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ext
  namespace: my-namespace
spec:
  externalTrafficPolicy: Local
  healthCheckNodePort: xxxxx
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
    - x.x.x.x/32
  ports:
    - name: dashboard
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
And the nginx logs:
*2 access forbidden by rule, client: 10.X.X.X
My goal is to restrict access per client endpoint (deny everything and allow only specific clients).
You can use curl to query the IP of the load balancer, for example curl 202.0.113.120. Please note that setting service.spec.externalTrafficPolicy to Local in GKE removes the nodes without Service endpoints from the list of nodes eligible for load-balanced traffic, so when you apply the Local value to your external traffic policy you need at least one Service endpoint behind the load balancer. Because of this, it is also important that service.spec.healthCheckNodePort is deployed and allowed in an ingress firewall rule. You can get the health check node port from your YAML with this command:
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
You can follow this guide if you need more information about how the LoadBalancer Service type works in GKE. Finally, you can limit traffic from the outside at your external load balancer by setting loadBalancerSourceRanges; in the following link you can find more information on how to protect your applications from outside traffic.
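As a rough sketch (the rule name is arbitrary and the port is the healthCheckNodePort placeholder from your manifest), the firewall rule for the health check could look like this; verify the exact health-check source ranges for your load balancer type in the GCP documentation:
gcloud compute firewall-rules create allow-gke-lb-health-checks \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:HEALTH_CHECK_NODE_PORT \
  --source-ranges=130.211.0.0/22,35.191.0.0/16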
Scenario:
I have four (4) Pods: payroll, internal, external, and mysql.
I want internal pod to only access:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
Kindly suggest what the missing part is. I made the network policy below, but my pod is unable to communicate with 'any' pod.
Hence it has achieved the given target, but is practically unable to access other pods. Below are my network policy details.
master $ k describe netpol/internal-policy
Name:         internal-policy
Namespace:    default
Created on:   2020-02-20 02:15:06 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     name=internal
  Allowing ingress traffic:
    <none> (Selected pods are isolated for ingress connectivity)
  Allowing egress traffic:
    To Port: 8080/TCP
    To:
      PodSelector: name=payroll
    ----------
    To Port: 3306/TCP
    To:
      PodSelector: name=mysql
  Policy Types: Egress
Policy YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              name: payroll
      ports:
        - protocol: TCP
          port: 8080
    - to:
        - podSelector:
            matchLabels:
              name: mysql
      ports:
        - protocol: TCP
          port: 3306
Hence it has achieved the given target, but is practically unable to access other pods.
If I understand you correctly, you achieved the goal defined in your network policy, and all your Pods carrying the label name: internal are currently able to communicate both with payroll (on port 8080) and mysql (on port 3306) Pods, right?
Please correct me if I'm wrong, but I see some contradiction in your statement. On the one hand, you want your internal Pods to be able to communicate only with a very specific set of Pods and to connect to them on specified ports:
I want internal pod to only access:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
and on the other, you seem surprised that they can't access any other Pods:
Hence it has achieved the given target, but is practically unable to access other pods.
Keep in mind that when you apply a NetworkPolicy rule to a specific set of Pods, a default deny-all rule is implicitly applied to the selected Pods at the same time (unless you decide to reconfigure the default policy to make it work the way you want).
As you can read here:
Pods become isolated by having a NetworkPolicy that selects them. Once
there is any NetworkPolicy in a namespace selecting a particular pod,
that pod will reject any connections that are not allowed by any
NetworkPolicy. (Other pods in the namespace that are not selected by
any NetworkPolicy will continue to accept all traffic.)
The above applies also to egress rules.
If your internal Pods currently have access only to the payroll and mysql Pods on the specified ports, everything works as it is supposed to.
If you are interested in denying all other traffic to your payroll and mysql Pods, you should apply an ingress rule on those Pods instead of defining egress on the Pods which are supposed to communicate with them; that way the latter are not deprived of the ability to communicate with other Pods.
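For example, a minimal sketch of such an ingress policy for the payroll Pods (reusing the name: internal and name: payroll labels from your manifests) could look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payroll-ingress-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: payroll
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              name: internal
      ports:
        - protocol: TCP
          port: 8080
An analogous policy with name: mysql and port 3306 would cover the mysql Pods.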
Please let me know if it helped. If something is not clear or my assumption was wrong, also please let me know and don't hesitate to ask additional questions.
I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without result. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I list the system pods using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-controller-xxxx": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Then I check which processes are using those ports and kill them. That frees the ports up and the ingress-controller pod gets deployed correctly.
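(For reference, I find and kill them roughly like this; the PID obviously differs each time:)
# find what is listening on 80/443, then kill it (run as root)
ss -tlnp | grep -E ':(80|443)\s'
kill <PID>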
What did I try in order to change the nginx-ingress-controller ports?
kubectl -n kube-system get deployment | grep nginx
> NAME READY UP-TO-DATE AVAILABLE AGE
> nginx-ingress-controller 0/1 1 0 9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
- containerPort: 81
hostPort: 81
protocol: TCP
- containerPort: 444
hostPort: 444
protocol: TCP
- containerPort: 18080
hostPort: 18080
protocol: TCP
Then I remove the subsections with ports 443 and 80, but when I roll out the changes, they get added back.
Now my services are not reachable anymore through ingress.
Please note that minikube ships with addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and to perform one of two specific actions based on the value of the managed resource's label:
addonmanager.kubernetes.io/mode
addonmanager.kubernetes.io/mode=Reconcile
Will be periodically reconciled. Direct manipulation of these addons through the apiserver is discouraged, because addon-manager will bring them back to the original state; in particular, an addon is re-created if it is deleted and reset to the state given by its template file if it is modified.
addonmanager.kubernetes.io/mode=EnsureExists
Will be checked for existence only. Users can edit these addons as they want.
So, to keep your customized version of the default Ingress listening ports, first change the addonmanager.kubernetes.io/mode label in the Ingress addon template to EnsureExists on the minikube VM.
Basically, minikube bootstraps the Nginx Ingress Controller as a separate addon, so by design you have to enable it in order to propagate the Ingress Controller's resources within the minikube cluster.
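For completeness, that is done with:
minikube addons enable ingress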
Once you enable a specific minikube addon, addon-manager creates template files for each component, placing them in the /etc/kubernetes/addons/ folder on the host machine, and then applies each manifest file, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the target K8s resources (Service, Deployment, etc.) with the template data.
Therefore, you can consider modifying the Ingress addon template data in the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target K8s objects; it may take some time until the K8s engine reflects the changes and re-spawns the related ReplicaSet-based resources.
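A hedged sketch of what the edited template might contain (the file name ingress-dp.yaml and the custom ports 8080/8443 are assumptions; adjust them to your environment):
# /etc/kubernetes/addons/ingress-dp.yaml (excerpt; file name may differ between minikube versions)
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists   # was Reconcile, so addon-manager stops reverting the edit
...
        ports:
          - containerPort: 80
            hostPort: 8080      # assumed custom port instead of 80
            protocol: TCP
          - containerPort: 443
            hostPort: 8443      # assumed custom port instead of 443
            protocol: TCP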
Well, I think you have to modify the Ingress which refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
    - hosts:
        - host.org
      secretName: my-host-tls-cert
  rules:
    - host: host.org
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 444
I had a couple of questions regarding Kubernetes ingress services/controllers.
For example I have an nginx frontend image that I am trying to run with kubectl -
kubectl run <deployment> --image <repo> --port <internal-nginx-port>.
Now I tried to expose this to the outside world with a service -
kubectl expose deployment <deployment> --target-port <port>.
Then I tried to create an Ingress with the following nginx-ing.yaml -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: urtutorsv2ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "coreos"
spec:
  backend:
    serviceName: <service>
    servicePort: <port>
Where my ingress.global-static-ip-name is correctly created and available in the Google Cloud console.
[I am assuming the service port here is the port I want on my "coreos" IP, so I set it to 80 initially, which didn't work; then I tried setting it to the same port specified in the first step, but it still didn't work.]
So, the issue is that I am not able to access the frontend at the URL http://COREOS_IP.
This is why I tried to use -
kubectl expose deployment <deployment> --target-port <port> --type NodePort
to see if it worked with a NodePort, and I was able to access the frontend.
So, I am thinking there might be a configuration mistake here, which is why I am not getting results with the Ingress.
Can anyone here help debug / fix the issue ?
Yeah, the service is there. I tried to check the status with kubectl get services and kubectl describe service k8urtutorsv2. It showed the service. I tried editing it and saved the NodePort value. The thing is, it works with the NodePort but not with 80 or 443.
You cannot directly expose a service on port 80 or 443 via NodePort.
The available range of node ports is predefined in the kube-apiserver configuration by the --service-node-port-range option, with the default value 30000-32767.
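If you need the service reachable on 80 or 443, a common alternative (a sketch, with the same placeholder names as in the question) is to expose it through a LoadBalancer Service or an Ingress instead of a raw NodePort, for example:
kubectl expose deployment <deployment> --port 80 --target-port <internal-nginx-port> --type LoadBalancer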
I have set up a working k8s cluster.
Each node of the cluster is inside the network 10.11.12.0/24 (physical network). Over this network runs a flanneld (Canal) CNI.
Each node has another network interface (not managed by k8s) with cidr 192.168.0.0/24
When I deploy a service like:
kind: Service
apiVersion: v1
metadata:
  name: my-awesome-webapp
spec:
  selector:
    server: serverA
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  externalTrafficPolicy: Local
  type: LoadBalancer
  externalIPs:
    - 192.168.0.163
The service is accessible at http://192.168.0.163, but the Pod receives source IP 192.168.0.163 (the eth0 address of the server), not my actual source IP (192.168.0.94).
The deployment consists of 2 pods with the same spec.
Is it possible for the Pods to see my source IP?
Does anyone know how to manage this? externalTrafficPolicy: Local seems not to be working.
Kubernetes changes the source IP to the cluster/node IPs; detailed information can be found in this document. Kubernetes has a feature to preserve the client source IP, which I believe you are already aware of.
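For reference, that feature is enabled by setting externalTrafficPolicy on the Service; it can also be applied to an existing Service with a patch like this (my-awesome-webapp is the Service name taken from your manifest):
kubectl patch svc my-awesome-webapp -p '{"spec":{"externalTrafficPolicy":"Local"}}'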
It seems like this is a bug in Kubernetes, and there is already an open issue for the setting below not working properly:
externalTrafficPolicy: Local
I suggest posting on that issue to get more attention on it.