I've been trying to patch a Deployment that is declared and applied by a kops addon (the AWS EBS CSI driver).
Unfortunately, after trying a variety of patching strategies, it seems that I am unable to patch a resource that doesn't have a base declared in my folder structure.
Note that I'm using FluxCD on top for reconciliation, and it doesn't see any change for this resource when I push the patch.
Here is an extract of the Deployment automatically generated and applied by kops that I want to change:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    addon.kops.k8s.io/name: aws-ebs-csi-driver.addons.k8s.io
    addon.kops.k8s.io/version: 1.0.0-kops.1
    app.kubernetes.io/instance: aws-ebs-csi-driver
    app.kubernetes.io/managed-by: kops
    app.kubernetes.io/name: aws-ebs-csi-driver
    app.kubernetes.io/version: v1.0.0
    k8s-addon: aws-ebs-csi-driver.addons.k8s.io
    super: unpatched   # <- the label I want to patch
  name: ebs-csi-controller
  namespace: kube-system
My kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
patches:
  - target:
      kind: Deployment
      name: ebs-csi-controller
      namespace: kube-system
    path: patch-ebs.yaml
and the actual patch-ebs.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ebs-csi-controller
  namespace: kube-system
  labels:
    super: patched
I also tried with a JSON patch, patch-ebs.json:
[
{"op": "replace", "path": "/metadata/labels/super", "value": "Patched"}
]
Running kustomize build doesn't generate any output.
Creating a deployment file that is referred to as a resource/base does generate a proper patch that can be applied.
Is this a limitation of Kustomize, or am I missing something along the way?
Thanks for your help!
Kustomize can only patch (or otherwise mutate) resources that are included in the kustomization as resources, or that are generated by its built-in generators.
One caveat: you could include a resource in your kustomization that matches your existing ebs-csi-controller Deployment, and Kustomize will then build a resource that can be applied on top of your existing Deployment, as sketched below.
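A minimal sketch of that layout (file names are illustrative; the stub mirrors the metadata of the Deployment that kops already applied, and in practice enough of its spec to pass validation):

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ebs-csi-controller.yaml   # stub copy of the existing Deployment
patches:
  - target:
      kind: Deployment
      name: ebs-csi-controller
      namespace: kube-system
    path: patch-ebs.yaml

# ebs-csi-controller.yaml (stub matching the kops-generated object shown above)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ebs-csi-controller
  namespace: kube-system
  labels:
    super: unpatched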
From what I can see, you are trying to patch an addon managed by the kops controller. Try the approach from the FluxCD docs instead: the kustomize.toolkit.fluxcd.io/ssa: merge annotation makes Flux server-side apply the manifest as a patch on top of the existing object, and prune: disabled keeps Flux from garbage-collecting it.
So, in your case patch-ebs.yaml would look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kustomize.toolkit.fluxcd.io/prune: disabled
    kustomize.toolkit.fluxcd.io/ssa: merge
  name: ebs-csi-controller
  namespace: kube-system
  labels:
    super: patched
and the kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
  - patch-ebs.yaml
Related
I am fairly new to GCP and I have a REST URI for uploading large files.
I have an ingress-nginx-controller Service and want to change it so that files larger than 1 MB can be uploaded, with a limit.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.35.0","helm.sh/chart":"ingress-nginx-2.13.0"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"externalTrafficPolicy":"Local","ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":"http"},{"name":"https","port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"LoadBalancer"}}
  creationTimestamp: "2020-09-21T18:37:27Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    helm.sh/chart: ingress-nginx-2.13.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
This is the error it throws:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.19.2</center>
</body>
</html>
If you need to increase the body size of files you upload via the ingress controller, you need to add an annotation to your ingress resource:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
Documentation available here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size
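For example, a rough sketch of an Ingress carrying the annotation (the name, host and backend service are placeholders, and the exact apiVersion/backend syntax depends on your Kubernetes version):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: upload-ingress                      # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
spec:
  ingressClassName: nginx
  rules:
    - host: upload.example.com              # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: upload-service        # placeholder backend service
                port:
                  number: 80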
If you're using the ingress-nginx Helm chart, you need to set the values in the following way:
controller:
  replicaCount: 2
  service:
    annotations:
      ...
  config:
    proxy-body-size: "8m"
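You could then apply these values with something like the following (release name, namespace and values file name are assumptions; the repo URL is the official ingress-nginx chart repository):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml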
I was wondering how to write a HelmRelease YAML file using the official Airflow Helm chart while overriding its values.yaml.
I'm trying to use this config file to deploy Airflow with Flux on a Kubernetes cluster.
I tried:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: airflow
  namespace: dev
spec:
  releaseName: airflow-dev
  chart:
    repository: https://airflow.apache.org
    name: airflow
Did I miss something ?
Thank you in advance
First you need to create a HelmRepository source artifact like this; let's name it charts.yml:
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: airflow
  namespace: flux
spec:
  interval: 1m
  url: https://airflow.apache.org
and then you can define your HelmRelease manifest as:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: airflow
  namespace: dev
spec:
  interval: 5m
  chart:
    spec:
      chart: airflow
      sourceRef:
        kind: HelmRepository
        name: airflow
        namespace: flux
      interval: 1m
The Flux version used here is 0.27.0.
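To override entries from the chart's values.yaml (the original question), you can add a values block to the HelmRelease spec; the keys shown below are only illustrative examples of Airflow chart values:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: airflow
  namespace: dev
spec:
  interval: 5m
  chart:
    spec:
      chart: airflow
      sourceRef:
        kind: HelmRepository
        name: airflow
        namespace: flux
  # overrides merged on top of the chart's default values.yaml
  values:
    executor: KubernetesExecutor    # illustrative override
    webserver:
      replicas: 2                   # illustrative override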
I am getting the below error:
Failed to update lock: configmaps "ingress-controller-leader-internal-nginx-internal" is forbidden: User "system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx-internal"
I am using multiple ingress controllers in my setup, in two different namespaces:
ingress-nginx-internal
ingress-nginx-external
After installation everything works fine for about 15 hours, then I get the above error.
ingress-nginx-internal.yaml is based on:
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml
In the above deploy.yaml I replaced the names with: sed 's/ingress-nginx/ingress-nginx-internal/g' deploy.yaml
The output of the below command:
# kubectl describe cm ingress-controller-leader-internal-nginx-internal -n ingress-nginx-internal
Name: ingress-controller-leader-internal-nginx-internal
Namespace: ingress-nginx-internal
Labels: <none>
Annotations: control-plane.alpha.kubernetes.io/leader:
{"holderIdentity":"ingress-nginx-internal-controller-657","leaseDurationSeconds":30,"acquireTime":"2020-07-24T06:06:27Z","ren...
Data
====
Events: <none>
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-internal-2.0.3
    app.kubernetes.io/name: ingress-nginx-internal
    app.kubernetes.io/instance: ingress-nginx-internal
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-internal
  namespace: ingress-nginx-internal
From the docs here
To run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic) the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-internal-controller
          args:
            - /nginx-ingress-controller
            - '--election-id=ingress-controller-leader-internal'
            - '--ingress-class=nginx-internal'
            - '--configmap=ingress/nginx-ingress-internal-controller'
When you create the Ingress resource for the internal traffic, you need to specify the ingress class as defined above, i.e. nginx-internal, for example:
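A rough sketch (name, host and backend are placeholders; on controller 0.32.0 the class is selected with the kubernetes.io/ingress.class annotation):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: internal-app                        # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
    - host: app.internal.example.com        # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: internal-app     # placeholder backend service
              servicePort: 80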
Check the permissions of the service account using the command below. If it returns no, then create the Role and RoleBinding.
kubectl auth can-i update configmaps --as=system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal -n ingress-nginx-internal
RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-role
  namespace: ingress-nginx-internal
rules:
  - apiGroups:
      - ""
    resources:
      - configmap
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-rolebinding
  namespace: ingress-nginx-internal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-role
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-internal
    namespace: ingress-nginx-internal
If you use multiple ingress controllers, remember to change the resource names in the Role accordingly.
For the default ingress-nginx namespace, the RoleBinding will look like the following.
The mistake in the example above is that we check access for configmaps but the Role grants update on configmap (singular). Here is a working sample:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-rolebinding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-role
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
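After applying the Role and RoleBinding, you can verify the permission again with the same check as above, adjusted to the default ingress-nginx names:

kubectl auth can-i update configmaps --as=system:serviceaccount:ingress-nginx:ingress-nginx -n ingress-nginx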
I am testing out something with PVs and wanted to get some clarification. We have an 18-node cluster (using Docker EE) and we have mounted an NFS share on each node to be used for Kubernetes persistent storage. I created a PV (using hostPath) and bound it to my nginx Deployment (mounting /usr/share/nginx/html on the PV).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test-namespace-pv
  namespace: test-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/mynginx/demo"
The PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-namespace-pvc
  namespace: test-namespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment File:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx-apps
  replicas: 2
  template:
    metadata:
      labels:
        run: mynginx-apps
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: nfs-test-namespace-pvc
      containers:
        - name: mynginx
          image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
So I assume that when my pod starts, the default index.html from the nginx image should be available at /usr/share/nginx/html inside my pod, and it should also be copied/available at /nfs_share/mynginx/demo.
However, I am not seeing any file there, and when I expose this deployment and access the service it returns a 403 error because the index file is missing. When I create an HTML file, either from inside the pod or from the node on the NFS share mounted as the PV, it works as expected.
Is my assumption that the default file gets copied to the hostPath correct, or am I missing something?
The contents of the image will not show up in /nfs_share/docker/mynginx/demo; the explanation is available here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
You do not see the PV directly in your pod; it is claimed by the PVC, which is then mounted inside the pod. Mounting the (initially empty) volume over /usr/share/nginx/html does not copy the image's default files into it, which is why the directory appears empty.
You can read the whole article Configure a Pod to Use a PersistentVolume for Storage which should answer all the questions.
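If the goal is to have the image's default index.html appear in the (initially empty) volume, one common workaround, not from that article, is an initContainer that copies the content into the volume before the main container starts. A rough sketch of the pod template portion of your Deployment (names reuse your manifest; the copy path /seed is arbitrary):

    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: nfs-test-namespace-pvc
      initContainers:
        - name: seed-html
          image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
          # copy the default content from the image into the empty volume
          command: ["sh", "-c", "cp -r /usr/share/nginx/html/. /seed/"]
          volumeMounts:
            - name: task-pv-storage
              mountPath: /seed
      containers:
        - name: mynginx
          image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
          volumeMounts:
            - name: task-pv-storage
              mountPath: /usr/share/nginx/html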
the "/mnt/data" directory should be created on the node which your pod running actually.
I have used the proxy-body-size item as described in the documentation and recreated my Ingress, but it has no effect on the ingress controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fileupload-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    nginx.org/rewrites: "serviceName=fileupload-service rewrite=/;"
I then changed my ConfigMap to set proxy-body-size, but it still doesn't work globally.
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-body-size: "100m"
Here is the documentation I followed:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#rewrite
What is wrong with my ingress?
There are different ingress controllers, and the annotations they support vary.
For kubernetes/ingress-nginx the annotations start with nginx.ingress.kubernetes.io, while for nginxinc/kubernetes-ingress they start with nginx.org.
Here is also a good article showing more differences between them.
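Since your manifest already uses nginx.org/rewrites and a nginx-config ConfigMap in the nginx-ingress namespace, you appear to be running the NGINX Inc. controller. Assuming that is the case, the body-size setting in that project is client-max-body-size rather than proxy-body-size; a sketch for your Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fileupload-ingress
  annotations:
    nginx.org/client-max-body-size: "100m"
    nginx.org/rewrites: "serviceName=fileupload-service rewrite=/;"

The global equivalent, under the same assumption, would be a client-max-body-size: "100m" key in your nginx-config ConfigMap.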