Failed to update lock: configmaps forbidden: User "system:serviceaccount:ingress-nginx

I am getting the below error:
Failed to update lock: configmaps "ingress-controller-leader-internal-nginx-internal" is forbidden: User "system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx-internal"
I am using multiple ingress controllers in my setup, in two different namespaces:
Ingress-Nginx-internal
Ingress-Nginx-external
After installation everything works fine for about 15 hours, and then I start getting the above error.
Ingress-nginx-internal.yaml
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml
In the above deploy.yaml, I replaced the names with: sed 's/ingress-nginx/ingress-nginx-internal/g' deploy.yaml
The output of the below command:
# kubectl describe cm ingress-controller-leader-internal-nginx-internal -n ingress-nginx-internal
Name: ingress-controller-leader-internal-nginx-internal
Namespace: ingress-nginx-internal
Labels: <none>
Annotations: control-plane.alpha.kubernetes.io/leader:
{"holderIdentity":"ingress-nginx-internal-controller-657","leaseDurationSeconds":30,"acquireTime":"2020-07-24T06:06:27Z","ren...
Data
====
Events: <none>
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-internal-2.0.3
    app.kubernetes.io/name: ingress-nginx-internal
    app.kubernetes.io/instance: ingress-nginx-internal
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-internal
  namespace: ingress-nginx-internal

From the docs here
To run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic) the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-internal-controller
          args:
            - /nginx-ingress-controller
            - '--election-id=ingress-controller-leader-internal'
            - '--ingress-class=nginx-internal'
            - '--configmap=ingress/nginx-ingress-internal-controller'
When you create the ingress resource for internal traffic, you need to specify the ingress class as defined above, i.e. nginx-internal.
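For example, an internal ingress might look like the following sketch (the app name, namespace and host are placeholders, not taken from the setup above):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    # must match the --ingress-class of the internal controller
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
    - host: my-app.internal.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80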
Check the permission of the service account using the below command. If it returns no, then create the Role and RoleBinding.
kubectl auth can-i update configmaps --as=system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal -n ingress-nginx-internal
RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-role
  namespace: ingress-nginx-internal
rules:
  - apiGroups:
      - ""
    resources:
      - configmap
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-rolebinding
  namespace: ingress-nginx-internal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-role
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-internal
    namespace: ingress-nginx-internal

If you use multiple ingress controllers, remember to change the resource names in the Role accordingly.

For the default namespace ingress-nginx, the Role and RoleBinding will look like the sample below.
The mistake in the example above is that we are checking access for configmaps, but the Role grants access to configmap (singular). Here is a working sample:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-rolebinding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-role
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
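After applying the Role and RoleBinding above, the same permission check should now return yes:
$ kubectl auth can-i update configmaps --as=system:serviceaccount:ingress-nginx:ingress-nginx -n ingress-nginx
yes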

Related

How to add a VirtualServer on top of a service in Kubernetes?

I have the following deployment and service config files for deploying a service to Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: write-your-own-name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: write-your-own-name
  template:
    metadata:
      labels:
        run: write-your-own-name
    spec:
      containers:
        - name: write-your-own-name-container
          image: gcr.io/cool-adviser-345716/automl:manual__2022-10-10T07_54_04_00_00
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: write-your-own-name
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    run: write-your-own-name
  type: LoadBalancer
K8s exposes the service on the endpoint (let's say) http://12.239.21.88
I then created a namespace nginx-deploy and installed the NGINX ingress controller using the command:
helm install controller-free nginx-stable/nginx-ingress \
--set controller.ingressClass=nginx-deployment \
--set controller.service.type=NodePort \
--set controller.service.httpPort.nodePort=30040 \
--set controller.enablePreviewPolicies=true \
--namespace nginx-deploy
I then added a rate limit policy
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 1r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
And then finally a VirtualServer
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: write-your-name-vs
spec:
  ingressClassName: nginx-deployment
  host: 12.239.21.88
  policies:
    - name: rate-limit-policy
  upstreams:
    - name: write-your-own-name
      service: write-your-own-name
      port: 80
  routes:
    - path: /
      action:
        pass: write-your-own-name
    - path: /hehe
      action:
        redirect:
          url: http://www.nginx.com
          code: 301
Before adding the VirtualServer, I could go to 12.239.21.88:80 and access my service, and I can still do that after adding the VirtualServer. But when I try accessing the page 12.239.21.88:80/hehe, I get a "detail not found" error.
I am guessing that this is because the VirtualServer is not working on top of the service. How do I expose my service with a VirtualServer? Or, alternatively: I want rate limiting on my service; how do I achieve this?
I followed this tutorial to try to get rate limiting to work:
NGINX Tutorial: Protect Kubernetes APIs with Rate Limiting
I am sorry if the question is too long but I have been trying to figure out rate limiting for a while and can't get it to work. Thanks in advance.

Kubernetes nginx ingress controller cannot upload size more than 1mb

I am fairly new to GCP and I have a REST URI for uploading large files.
I have an ingress-nginx-controller service and want to change it to allow uploads larger than 1 MB and set a limit.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.35.0","helm.sh/chart":"ingress-nginx-2.13.0"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"externalTrafficPolicy":"Local","ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":"http"},{"name":"https","port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"LoadBalancer"}}
  creationTimestamp: "2020-09-21T18:37:27Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    helm.sh/chart: ingress-nginx-2.13.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
This is the error it throws:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.19.2</center>
</body>
</html>
If you need to increase the body size of files you upload via the ingress controller, you need to add an annotation to your ingress resource:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
Documentation available here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size
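For example, a minimal sketch (the ingress name, host and backend service below are placeholders, not from the setup above):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: upload-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # allow request bodies (uploads) of up to 8 MB for this ingress
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
spec:
  rules:
    - host: upload.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: upload-service
              servicePort: 80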
If you're using the ingress-nginx Helm chart, you need to set the values in the following way:
controller:
  replicaCount: 2
  service:
    annotations:
      ...
  config:
    proxy-body-size: "8m"
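To roll this out with Helm, something like the following should work (assuming the release is called ingress-nginx, was installed from the ingress-nginx/ingress-nginx chart into the ingress-nginx namespace, and the snippet above is saved as values.yaml):
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f values.yaml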

Can you patch an arbitrary resource with no base via kustomize?

I've been trying to patch a Deployment that is declared and applied by a kops addon (the EBS driver).
Unfortunately, after trying a variety of patching strategies, it seems that I am unable to patch a resource that doesn't have a base declared in my folder structure.
Note that I'm using FluxCD on top for reconciliation, which doesn't see any change for this resource when pushing the patch.
Here is an extract of the Deployment automatically generated and applied by kops that I want to change:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    addon.kops.k8s.io/name: aws-ebs-csi-driver.addons.k8s.io
    addon.kops.k8s.io/version: 1.0.0-kops.1
    app.kubernetes.io/instance: aws-ebs-csi-driver
    app.kubernetes.io/managed-by: kops
    app.kubernetes.io/name: aws-ebs-csi-driver
    app.kubernetes.io/version: v1.0.0
    k8s-addon: aws-ebs-csi-driver.addons.k8s.io
    super: unpatched   # <-- the label I want to patch
  name: ebs-csi-controller
  namespace: kube-system
My kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
patches:
  - target:
      kind: Deployment
      name: ebs-csi-controller
      namespace: kube-system
    path: patch-ebs.yaml
and the actual patch-ebs.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ebs-csi-controller
  namespace: kube-system
  labels:
    super: patched
I also tried with a JSON patch, patch-ebs.json:
[
  {"op": "replace", "path": "/metadata/labels/super", "value": "Patched"}
]
Running a kustomize build doesn't generate any output; creating a deployment file which is referred to as a resource/base generates a proper patch that can be applied.
Is this a limitation of Kustomize, or am I missing something?
Thanks for your help!
Kustomize depends on the referenced resources being included as resources, or being generated using the built-in generators, for them to be patched or mutated in any way.
One caveat is that you could include a resource in your kustomization that matches your existing ebs-csi-controller Deployment, and Kustomize will build a resource that can be applied on top of your existing Deployment.
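A rough sketch of that workaround (assuming a copy of the existing Deployment manifest is saved as ebs-csi-controller.yaml next to the kustomization):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ebs-csi-controller.yaml   # copy of the existing Deployment, acting as the base
patches:
  - target:
      kind: Deployment
      name: ebs-csi-controller
      namespace: kube-system
    path: patch-ebs.yaml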
From what I can see, you are trying to patch a kind of addon (managed by the kops controller). Try this approach from the FluxCD docs.
So, in your case patch-ebs.yaml will look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kustomize.toolkit.fluxcd.io/prune: disabled
    kustomize.toolkit.fluxcd.io/ssa: merge
  name: ebs-csi-controller
  namespace: kube-system
  labels:
    super: patched
and the kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
  - patch-ebs.yaml
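After committing and pushing the change, you can ask Flux to reconcile right away instead of waiting for the next sync interval, for example (assuming the Flux Kustomization is named flux-system and lives in the flux-system namespace):
flux reconcile kustomization flux-system --with-source -n flux-system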

Automatically update Kubernetes resource if another resource is created

I currently have the following challenge: We are using two ingress controllers in our cloud Kubernetes cluster, a custom Nginx ingress controller, and a cloud ingress controller on the load balancer.
The challenge now is when creating an Nginx-ingress element, that an automatic update on the cloud ingress controller ingress element is triggered. The ingress controller of the cloud provider does not support host specifications like *.example.com, so we have to work around it.
Cloud Provider Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cloudprovider-listener-https
  namespace: nginx-ingress-controller
  annotations:
    kubernetes.io/elb.id: "<loadbalancerid>"
    kubernetes.io/elb.port: "<loadbalancerport>"
    kubernetes.io/ingress.class: "<cloudprovider>"
spec:
  rules:
    - host: "customer1.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-controller
                port:
                  number: 80
            property:
              ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
    - host: "customer2.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-controller
                port:
                  number: 80
            property:
              ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
    - host: "customer3.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-controller
                port:
                  number: 80
            property:
              ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
  tls:
    - hosts:
        - "*.example.com"
      secretName: wildcard-cert
Nginx Ingress Config for each Customer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: <namespace>
  annotations:
    kubernetes.io/ingress.class: nginx
    # ... several nginx-ingress annotations
spec:
  rules:
    - host: "customer<x>.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: <port>
            property:
              ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
Currently, the cloud ingress resource is created dynamically by Helm, but it is triggered externally and the paths are queried by a script ("kubectl get ing -A" + magic).
Is there a way to monitor NGINX ingresses internally in the cluster and automatically trigger an update of the cloud ingress for new ingress elements?
Or am I going about this completely wrong?
Hope you guys can help.
I'll describe a solution that requires running kubectl commands from within the Pod.
In short, you can use a script to continuously monitor the .metadata.generation value of the ingress resource, and when this value is increased, you can run your "kubectl get ing -A + magic".
The .metadata.generation value is incremented for all changes, except for changes to .metadata or .status.
Below, I will provide a detailed step-by-step explanation.
To check the generation of the web ingress resource, we can use:
### kubectl get ingress <INGRESS_RESOURCE_NAME> -n default --template '{{.metadata.generation}}'
$ kubectl get ingress web -n default --template '{{.metadata.generation}}'
1
To constantly monitor this value, we can create a Bash script:
NOTE: This script compares generation to newGeneration in a while loop to detect any .metadata.generation changes.
$ cat check-script.sh
#!/bin/bash
generation="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
while true; do
  newGeneration="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
  if [[ "${generation}" != "${newGeneration}" ]]; then
    echo "Modified !!!" # Here you can additionally add "magic"
    generation=${newGeneration}
  fi
done
We want to run this script from inside the Pod, so I converted it to a ConfigMap, which will allow us to mount this script in a volume (see: Using ConfigMaps as files from a Pod):
$ cat check-script-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: check-script
data:
  checkScript.sh: |
    #!/bin/bash
    generation="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
    while true; do
      newGeneration="$(kubectl get ingress web -n default --template '{{.metadata.generation}}')"
      if [[ "${generation}" != "${newGeneration}" ]]; then
        echo "Modified !!!"
        generation=${newGeneration}
      fi
    done
$ kubectl apply -f check-script-configmap.yml
configmap/check-script created
For security reasons, I've created a separate ingress-checker ServiceAccount with the view ClusterRole assigned, and our Pod will run under this ServiceAccount:
NOTE: I've created a Deployment instead of a single Pod.
$ cat all-in-one.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-checker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-checker-binding
subjects:
  - kind: ServiceAccount
    name: ingress-checker
    namespace: default
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ingress-checker
  name: ingress-checker
spec:
  selector:
    matchLabels:
      app: ingress-checker
  template:
    metadata:
      labels:
        app: ingress-checker
    spec:
      serviceAccountName: ingress-checker
      volumes:
        - name: check-script
          configMap:
            name: check-script
      containers:
        - image: bitnami/kubectl
          name: test
          command: ["bash", "/mnt/checkScript.sh"]
          volumeMounts:
            - name: check-script
              mountPath: /mnt
After applying the above manifest, the ingress-checker Deployment was created and started monitoring the web ingress resource:
$ kubectl apply -f all-in-one.yaml
serviceaccount/ingress-checker created
clusterrolebinding.rbac.authorization.k8s.io/ingress-checker-binding created
deployment.apps/ingress-checker created
$ kubectl get deploy,pod | grep ingress-checker
deployment.apps/ingress-checker 1/1 1
pod/ingress-checker-6846474c9-rhszh 1/1 Running
Finally, we can check how it works.
From one terminal window I checked the ingress-checker logs with the $ kubectl logs -f deploy/ingress-checker command.
From another terminal window, I modified the web ingress resource.
Second terminal window:
$ kubectl edit ing web
ingress.networking.k8s.io/web edited
First terminal window:
$ kubectl logs -f deploy/ingress-checker
Modified !!!
As you can see, it works as expected. We have the ingress-checker Deployment that monitors changes to the web ingress resource.

kubernetes application throws DatastoreException, Missing or insufficient permissions. Service key file provided

I am deploying a Java application on Google Kubernetes Engine. The application starts correctly but fails when trying to request data. The exception is "DatastoreException: Missing or insufficient permissions". I created a service account with the "Owner" role and provided the service account key to Kubernetes. Here is how I apply the Kubernetes deployment:
# delete old secret
kubectl delete secret google-key --ignore-not-found
# file with key
kubectl create secret generic google-key --from-file=key.json
kubectl apply -f prod-kubernetes.yml
Here is deployment config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: user
  labels:
    app: user
spec:
  type: NodePort
  ports:
    - port: 8000
      name: user
      targetPort: 8000
      nodePort: 32756
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: google-key
      containers:
        - name: usercontainer
          image: gcr.io/proj/user:v1
          imagePullPolicy: Always
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          ports:
            - containerPort: 8000
I wonder why it is not working. I have used this config in a previous deployment and it worked.
UPD: I made sure that /var/secrets/google/key.json exists in the pod. I log Files.exists(System.getenv("GOOGLE_APPLICATION_CREDENTIALS")), and I also print the content of this file; it does not appear to be corrupted.
Solved: the reason was an incorrect env name, GOOGLE_CLOUD_PROJECT.
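For reference, a sketch of how the corrected env section of the container might look (assuming the application expects the project ID in GOOGLE_CLOUD_PROJECT; the value is a placeholder):
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
  - name: GOOGLE_CLOUD_PROJECT
    value: my-gcp-project-id   # placeholder: your actual GCP project ID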
