Automatically update Kubernetes resource if another resource is created - nginx

I currently have the following challenge: we are using two ingress controllers in our cloud Kubernetes cluster, a custom Nginx ingress controller and a cloud ingress controller on the load balancer.
The challenge is that when an Nginx ingress resource is created, an update of the corresponding ingress resource for the cloud ingress controller has to be triggered automatically. The cloud provider's ingress controller does not support host specifications like *.example.com, so we have to work around it.
Cloud Provider Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cloudprovider-listener-https
  namespace: nginx-ingress-controller
  annotations:
    kubernetes.io/elb.id: "<loadbalancerid>"
    kubernetes.io/elb.port: "<loadbalancerport>"
    kubernetes.io/ingress.class: "<cloudprovider>"
spec:
  rules:
  - host: "customer1.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ingress-nginx-controller
            port:
              number: 80
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
  - host: "customer2.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ingress-nginx-controller
            port:
              number: 80
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
  - host: "customer3.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ingress-nginx-controller
            port:
              number: 80
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
  tls:
  - hosts:
    - "*.example.com"
    secretName: wildcard-cert
Nginx Ingress Config for each Customer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: <namespace>
  annotations:
    kubernetes.io/ingress.class: nginx
    # ... several nginx-ingress annotations
spec:
  rules:
  - host: "customer<x>.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: <port>
        property:
          ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
Currently, the cloud ingress resource is created dynamically by Helm, but the rendering is triggered externally and the paths are collected by a script ("kubectl get ing -A" + magic).
Is there a way to monitor Nginx ingresses internally in the cluster and automatically trigger an update of the cloud ingress for new ingress elements?
Or am I going about this completely wrong?
Hope you guys can help.

I'll describe a solution that requires running kubectl commands from within the Pod.
In short, you can use a script to continuously monitor the .metadata.generation value of the ingress resource, and when this value is increased, you can run your "kubectl get ing -A + magic".
The .metadata.generation value is incremented for all changes, except for changes to .metadata or .status.
Below, I will provide a detailed step-by-step explanation.
To check the generation of the web ingress resource, we can use:
### kubectl get ingress <INGRESS_RESOURCE_NAME> -n default --template '{{.metadata.generation}}'
$ kubectl get ingress web -n default --template '{{.metadata.generation}}'
1
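Metadata-only changes (for example, adding an annotation) do not bump this value, while a spec change does. A quick way to confirm that on your own cluster (a hedged sketch; the annotation key is arbitrary):
# metadata-only change: generation stays the same
$ kubectl annotate ingress web -n default touched=now --overwrite
$ kubectl get ingress web -n default --template '{{.metadata.generation}}'
# spec change (e.g. edit a path or backend): generation is incremented
$ kubectl edit ingress web -n default
$ kubectl get ingress web -n default --template '{{.metadata.generation}}'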
To constantly monitor this value, we can create a Bash script:
NOTE: This script compares generation to newGeneration in a while loop to detect any .metadata.generation changes.
$ cat check-script.sh
#!/bin/bash
generation="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
while true; do
  newGeneration="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
  if [[ "${generation}" != "${newGeneration}" ]]; then
    echo "Modified !!!" # Here you can additionally add "magic"
    generation=${newGeneration}
  fi
done
We want to run this script from inside a Pod, so I converted it to a ConfigMap, which allows us to mount the script as a file in a volume (see: Using ConfigMaps as files from a Pod):
$ cat check-script-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: check-script
data:
  checkScript.sh: |
    #!/bin/bash
    generation="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
    while true; do
      newGeneration="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
      if [[ "${generation}" != "${newGeneration}" ]]; then
        echo "Modified !!!"
        generation=${newGeneration}
      fi
    done
$ kubectl apply -f check-script-configmap.yml
configmap/check-script created
For security reasons, I've created a separate ingress-checker ServiceAccount with the view Role assigned and our Pod will run under this ServiceAccount:
NOTE: I've created a Deployment instead of a single Pod.
$ cat all-in-one.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-checker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-checker-binding
subjects:
- kind: ServiceAccount
  name: ingress-checker
  namespace: default
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ingress-checker
  name: ingress-checker
spec:
  selector:
    matchLabels:
      app: ingress-checker
  template:
    metadata:
      labels:
        app: ingress-checker
    spec:
      serviceAccountName: ingress-checker
      volumes:
      - name: check-script
        configMap:
          name: check-script
      containers:
      - image: bitnami/kubectl
        name: test
        command: ["bash", "/mnt/checkScript.sh"]
        volumeMounts:
        - name: check-script
          mountPath: /mnt
After applying the above manifest, the ingress-checker Deployment was created and started monitoring the web ingress resource:
$ kubectl apply -f all-in-one.yaml
serviceaccount/ingress-checker created
clusterrolebinding.rbac.authorization.k8s.io/ingress-checker-binding created
deployment.apps/ingress-checker created
$ kubectl get deploy,pod | grep ingress-checker
deployment.apps/ingress-checker 1/1 1
pod/ingress-checker-6846474c9-rhszh 1/1 Running
Finally, we can check how it works.
From one terminal window I checked the ingress-checker logs with the $ kubectl logs -f deploy/ingress-checker command.
From another terminal window, I modified the web ingress resource.
Second terminal window:
$ kubectl edit ing web
ingress.networking.k8s.io/web edited
First terminal window:
$ kubectl logs -f deploy/ingress-checker
Modified !!!
As you can see, it works as expected. We have the ingress-checker Deployment that monitors changes to the web ingress resource.
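As a side note, if the polling loop ever becomes a concern, a rough alternative (just a sketch, assuming the same view permissions and the bitnami/kubectl image) is to let kubectl's built-in watch stream the changes instead:
#!/bin/bash
# Print one line for every Ingress add/update event across all namespaces;
# replace the echo with your "kubectl get ing -A" + magic.
kubectl get ingress -A --watch --no-headers \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,GEN:.metadata.generation |
while read -r namespace name generation; do
  echo "Ingress ${namespace}/${name} changed (generation ${generation})"
done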

Related

How to add a VirtualServer on top of a service in Kubernetes?

I have the following deployment and service config files for deploying a service to Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: write-your-own-name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: write-your-own-name
  template:
    metadata:
      labels:
        run: write-your-own-name
    spec:
      containers:
      - name: write-your-own-name-container
        image: gcr.io/cool-adviser-345716/automl:manual__2022-10-10T07_54_04_00_00
        ports:
        - containerPort: 80
apiVersion: v1
kind: Service
metadata:
  name: write-your-own-name
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: write-your-own-name
  type: LoadBalancer
K8s exposes the service on the endpoint (let's say) http://12.239.21.88
I then created a namespace nginx-deploy and installed nginx-controller using the command
helm install controller-free nginx-stable/nginx-ingress \
  --set controller.ingressClass=nginx-deployment \
  --set controller.service.type=NodePort \
  --set controller.service.httpPort.nodePort=30040 \
  --set controller.enablePreviewPolicies=true \
  --namespace nginx-deploy
I then added a rate limit policy
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 1r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
And then finally a VirtualServer
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: write-your-name-vs
spec:
  ingressClassName: nginx-deployment
  host: 12.239.21.88
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: write-your-own-name
    service: write-your-own-name
    port: 80
  routes:
  - path: /
    action:
      pass: write-your-own-name
  - path: /hehe
    action:
      redirect:
        url: http://www.nginx.com
        code: 301
Before adding the VirtualServer, I could go to 12.239.21.88:80 and access my service, and I can still do that after adding the VirtualServer. But when I try accessing the page 12.239.21.88:80/hehe, I get a "detail not found" error.
I am guessing that this is because the VirtualServer is not working on top of the service. How do I expose my service with a VirtualServer? Alternatively, I just want rate limiting on my service; how do I achieve that?
I used the following tutorial to get rate-limiting to work:
NGINX Tutorial: Protect Kubernetes APIs with Rate Limiting
I am sorry if the question is too long but I have been trying to figure out rate limiting for a while and can't get it to work. Thanks in advance.

Canary rollouts with linkerd and argo rollouts

I'm trying to configure a canary rollout for a demo, but I'm having trouble getting the traffic splitting to work with Linkerd. The funny part is that I was able to get this working with Istio, and I find Istio to be much more complicated than Linkerd.
I have a basic Go service defined like this:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: fish
spec:
  [...]
  strategy:
    canary:
      canaryService: canary-svc
      stableService: stable-svc
      trafficRouting:
        smi: {}
      steps:
      - setWeight: 5
      - pause: {}
      - setWeight: 20
      - pause: {}
      - setWeight: 50
      - pause: {}
      - setWeight: 80
      - pause: {}
---
apiVersion: v1
kind: Service
metadata:
  name: canary-svc
spec:
  selector:
    app: fish
  ports:
  - name: http
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: stable-svc
spec:
  selector:
    app: fish
  ports:
  - name: http
    port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fish
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    cert-manager.io/cluster-issuer: letsencrypt-production
    cert-manager.io/acme-challenge-type: dns01
    external-dns.alpha.kubernetes.io/hostname: fish.local
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
  - host: fish.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stable-svc
            port:
              number: 8080
When I do the deploy (sync) via ArgoCD I can see the traffic split is 50/50:
- apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    [...]
    name: fish
    namespace: default
  spec:
    backends:
    - service: canary-svc
      weight: "50"
    - service: stable-svc
      weight: "50"
    service: stable-svc
However, doing a curl command in a while loop, I only get back the stable-svc. The only time I see a change is after I have completely moved the service to 100%.
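For reference, the loop is nothing fancy, roughly this (a sketch; the hostname is the one from the Ingress above):
while true; do
  curl -s http://fish.local/   # the response should indicate whether stable or canary answered
  sleep 1
done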
I tried to follow this: https://argoproj.github.io/argo-rollouts/getting-started/smi/
Any help would be greatly appreciated.
Thanks
After reading this: https://linkerd.io/2.10/tasks/using-ingress/ I discovered you need to modify your ingress controller with a special annotation:
$ kubectl get deployment <ingress-controller> -n <ingress-namespace> -o yaml | linkerd inject --ingress - | kubectl apply -f -
TLDR; if you want Linkerd functionality like Service Profiles, Traffic Splits, etc, there is additional configuration required to make the Ingress controller’s Linkerd proxy run in ingress mode.
So there's a bit more context in this issue but the TL;DR is ingresses tend to target individual pods instead of the service address. Putting Linkerd's proxy in ingress mode tells it to override that behaviour. NGINX does already have a setting that will let it hit services instead of endpoints directly, you can see that in their docs here.
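If I am reading those docs right, the setting in question is the service-upstream annotation (a sketch; this assumes the kubernetes/ingress-nginx controller, which the annotations above suggest you are running):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fish
  annotations:
    # Proxy to the Service's ClusterIP instead of the individual pod endpoints,
    # so traffic splitting applied at the service level can take effect.
    nginx.ingress.kubernetes.io/service-upstream: "true"
    kubernetes.io/ingress.class: 'nginx'
    # ... the remaining annotations and the spec stay as in the Ingress above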

app on path instead of root not working for Kubernetes Ingress

I have an issue at work with K8s Ingress and I will use fake examples here to illustrate my point.
Assume I have an app called Tweeta and my company is called ABC. My app currently sits on tweeta.abc.com.
But we want to migrate our app to app.abc.com/tweeta.
My current ingress in K8s is as below:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress
spec:
  rules:
  - host: tweeta.abc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: tweeta-backend
          servicePort: 80
For migration, I added a second ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /tweeta/api
        backend:
          serviceName: tweeta-backend
          servicePort: 80
For sake of continuity, I would like to have 2 ingresses pointing to my services at the same time. When the new domain is ready and working, I would just need to tear down the old ingress.
However, I am not having any luck with the new domain using this ingress. Is it because the app is hosted on a path and the K8s ingress needs it to be hosted at the root? Or is it a configuration I would need to do on the nginx side?
As far as I tried, I couldn't reproduce your problem. So I decided to describe how I tried to reproduce it, so you can follow the same steps and depending on where/if you fail, we can find what is causing the issue.
First of all, make sure you are using a NGINX Ingress as it's more powerful.
I installed my NGINX Ingress using Helm following these steps:
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
$ helm install nginx-ingress stable/nginx-ingress
For the deployment, we are going to use an example from here.
Deploy a hello, world app
Create a Deployment using the following command:
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
Output:
deployment.apps/web created
Expose the Deployment:
kubectl expose deployment web --type=NodePort --port=8080
Output:
service/web exposed
Create Second Deployment
Create a v2 Deployment using the following command:
kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0
Output:
deployment.apps/web2 created
Expose the Deployment:
kubectl expose deployment web2 --port=8080 --type=NodePort
Output:
service/web2 exposed
At this point we have the Deployments and Services running:
$ kubectl get deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    1/1     1            1           24m
web2   1/1     1            1           22m
$ kubectl get service
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.96.0.1        <none>        443/TCP                      5d5h
nginx-ingress-controller        LoadBalancer   10.111.183.151   <pending>     80:31974/TCP,443:32396/TCP   54m
nginx-ingress-default-backend   ClusterIP      10.104.30.84     <none>        80/TCP                       54m
web                             NodePort       10.102.38.233    <none>        8080:31887/TCP               24m
web2                            NodePort       10.108.203.191   <none>        8080:32405/TCP               23m
For the ingress, we are going to use the one provided in the question but we have to change the backends:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress
spec:
  rules:
  - host: tweeta.abc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 8080
      - path: /api
        backend:
          serviceName: web2
          servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta
        backend:
          serviceName: web
          servicePort: 8080
      - path: /tweeta/api
        backend:
          serviceName: web2
          servicePort: 8080
Now let's test our ingresses:
$ curl tweeta.abc.com
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk
$ curl tweeta.abc.com/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n
$ curl app.abc.com/tweeta
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk
$ curl app.abc.com/tweeta/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n
As can be seen, everything is working fine with no mods in your ingresses.
I assume your frontend Pod expects the path / and backend Pod expects the path /api
The first ingress config doesn't transform the request and it goes to the frontend(Fpod)/backend(Bpod) Pods as is:
http://tweeta.abc.com/ -> ingress -> svc -> Fpod: [ http://tweeta.abc.com/ ]
http://tweeta.abc.com/api -> ingress -> svc -> Bpod: [ http://tweeta.abc.com/api ]
but with second ingress it doesn't work as expected:
http://app.abc.com/tweeta -> ingress -> svc -> Fpod: [ http://app.abc.com/tweeta ]
http://app.abc.com/tweeta/api -> ingress -> svc -> Bpod: [ http://app.abc.com/tweeta/api ]
The Pod request path is changed from / to /tweeta and from /api to /tweeta/api. I guess that's not the expected behavior: usually the application in a Pod doesn't care about the Host header, but the path must be correct. If your Pods aren't designed to respond to the additional /tweeta prefix, they likely respond with 404 (Not Found) when the second ingress is used.
To fix it, you have to add a rewrite annotation so that the /tweeta prefix is stripped from the request before it reaches the Pods:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta(/|$)(.*)
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /tweeta(/)(api$|api/.*)
        backend:
          serviceName: tweeta-backend
          servicePort: 80
The result will be as follows, which is exactly how it is supposed to work:
http://app.abc.com/tweeta -> ingress -> svc -> Fpod: [ http://app.abc.com/ ]
http://app.abc.com/tweeta/blabla -> ingress -> svc -> Fpod: [ http://app.abc.com/blabla ]
http://app.abc.com/tweeta/api -> ingress -> svc -> Bpod: [ http://app.abc.com/api ]
http://app.abc.com/tweeta/api/blabla -> ingress -> svc -> Bpod: [ http://app.abc.com/api/blabla ]
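A quick way to verify the rewrites from outside (a sketch; it assumes app.abc.com resolves to your ingress controller):
$ curl http://app.abc.com/tweeta       # should be answered by the frontend
$ curl http://app.abc.com/tweeta/api   # should be answered by the backend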
To check the ingress controller's logs and configuration, use respectively:
$ kubectl logs -n ingress-controller-namespace ingress-controller-pods-name
$ kubectl exec -it -n ingress-controller-namespace ingress-controller-pods-name -- cat /etc/nginx/nginx.conf > local-file-name.txt && less local-file-name.txt

Kubernetes deployment fails

I have Pod and Service YAML files on my system. I want to create these two using kubectl create -f <file> and connect from an outside browser to test connectivity. Here is what I have followed.
My Pod :
apiVersion: v1
kind: Pod
metadata:
  name: client-nginx
  labels:
    component: web
spec:
  containers:
  - name: client
    image: nginx
    ports:
    - containerPort: 3000
My Service file:
apiVersion: v1
kind: Service
metadata:
  name: client-nginx-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 3000
    nodePort: 31616
  selector:
    component: web
I used kubectl create -f my_pod.yaml and then kubectl get pods shows my pod client-nginx
And then kubectl create -f my_service.yaml; no errors here either, and kubectl get services shows all the services.
When I try to curl to service, it gives
curl: (7) Failed to connect to 192.168.0.10 port 31616: Connection refused.
kubectl get deployments doesn't show my pod. Do I have to deploy it? I am a bit confused. If I use the instructions given here, I can deploy nginx successfully and access it from an outside browser.
I used the instructions given here to test this.
Try with this service:
apiVersion: v1
kind: Service
metadata:
  name: client-nginx-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 80
    nodePort: 31616
  selector:
    component: web
You also missed giving the selector in the pod YAML, which will be picked up by the service where you have mentioned the selector as component.
Use this in pod yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-nginx
  labels:
    component: web
spec:
  selector:
    component: nginx
  containers:
  - name: client
    image: nginx
    ports:
    - containerPort: 3000
Useful links:
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
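To double-check the result after applying the changes (a short sketch; 192.168.0.10 is the node IP from the question):
# The ENDPOINTS column must list the Pod's IP; if it is empty,
# the Service selector does not match the Pod's labels.
$ kubectl get endpoints client-nginx-port
# With targetPort 80 the NodePort should now respond:
$ curl http://192.168.0.10:31616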

kubernetes application throws DatastoreException, Missing or insufficient permissions. Service key file provided

I am deploying a Java application on Google Kubernetes Engine. The application starts correctly but fails when trying to request data. The exception is "DatastoreException, Missing or insufficient permissions". I created a service account with the "Owner" role and provided the service account key to Kubernetes. Here is how I apply the Kubernetes deployment:
# delete old secret
kubectl delete secret google-key --ignore-not-found
# file with key
kubectl create secret generic google-key --from-file=key.json
kubectl apply -f prod-kubernetes.yml
Here is deployment config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: user
  labels:
    app: user
spec:
  type: NodePort
  ports:
  - port: 8000
    name: user
    targetPort: 8000
    nodePort: 32756
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: google-key
      containers:
      - name: usercontainer
        image: gcr.io/proj/user:v1
        imagePullPolicy: Always
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8000
I wonder why it is not working? I have used this config in previous deployment and had success.
UPD: I made sure that /var/secrets/google/key.json exists in the pod. I print Files.exists(System.getenv("GOOGLE_APPLICATION_CREDENTIALS")) to the log. I also print the content of this file; it does not seem to be corrupted.
Solved, the reason was an incorrect env variable name, GOOGLE_CLOUD_PROJECT.
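For reference, a minimal sketch of the corrected container env block (hedged: it assumes the fix was adding the properly named variable; <project-id> is a placeholder):
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /var/secrets/google/key.json
- name: GOOGLE_CLOUD_PROJECT
  value: <project-id>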
