Use kubectl patch to add a DNS rewrite rule to the CoreDNS ConfigMap

I want to use the kubectl patch command to add a DNS rewrite rule to the coredns configmap, as described at Custom DNS Entries For Kubernetes. The default config map looks like this:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
....
and I want to add the line
rewrite name old.name new.name
but how to specify adding a line inside the ".:53" block is confounding me.
I know that I can get a similar result using kubectl get ... | sed ... | kubectl replace -f -, but that would look kind of ugly, plus I want to expand my knowledge of kubectl patch using JSON. Thanks!

You cannot modify the ConfigMap with kubectl patch in your case.
data.Corefile is a key, and its value (the Corefile content) is of type string.
The api-server treats it as an opaque string of bytes; you cannot patch part of a string with kubectl patch.
And second of all:
I want to expand my knowledge of kubectl patch using JSON
The Corefile is not even valid JSON. And even if it were, the api-server doesn't see JSON/YAML here; to the api-server it is just a string of characters.
So what can you do?
You are left with kubectl get ... | sed ... | kubectl replace -f -, and this is a totally valid solution.
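For example, something along these lines should do it (a rough sketch, assuming GNU sed and that you want the rule to sit right after the errors line; the captured indentation keeps the inserted line inside the Corefile block scalar):

kubectl get configmap coredns -n kube-system -o yaml \
  | sed 's/^\( *\)errors$/\1errors\n\1rewrite name old.name new.name/' \
  | kubectl replace -f -

Afterwards kubectl -n kube-system get configmap coredns -o yaml should show the extra line, and CoreDNS will pick it up on its own because the reload plugin is already in your Corefile.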

Related

Enforce a domain pattern that a service can use

I have a multi-tenant Kubernetes cluster. On it I have an nginx reverse proxy with load balancer and the domain *.example.com points to its IP.
Now, several namespaces are essentially grouped together as project A and project B (according to the different users).
How can I ensure that any service in a namespace with label project=a can have any domain like my-service.project-a.example.com, but not something like my-service.project-b.example.com or my-service.example.com? Please keep in mind that I use NetworkPolicies to isolate the communication between the different projects, though communication with the nginx namespace and the reverse proxy is always possible.
Any ideas would be very welcome.
EDIT:
I made some progress: I have deployed Gatekeeper to my GKE clusters via Helm charts, and I am now trying to ensure that only Ingress hosts of the form "*.project-name.example.com" are allowed. For this, I have different namespaces that each have a label like "project=a", and each of these should only be allowed to use Ingress hosts of the form "*.a.example.com". Hence I need the project label information for the respective namespaces. I wanted to deploy the following resources:
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredingress
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredIngress
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredingress
        operations := {"CREATE", "UPDATE"}
        ns := input.review.object.metadata.namespace
        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          not data.kubernetes.namespaces[ns].labels.project
          msg := sprintf("Ingress denied as namespace '%v' is missing 'project' label", [ns])
        }
        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          operations[input.request.operation]
          host := input.request.object.spec.rules[_].host
          project := data.kubernetes.namespaces[ns].labels.project
          not fqdn_matches(host, project)
          msg := sprintf("invalid ingress host %v, has to be of the form *.%v.example.com", [host, project])
        }
        fqdn_matches(str, pattern) {
          str_parts := split(str, ".")
          count(str_parts) == 4
          str_parts[1] == pattern
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredIngress
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Ingress"]
---
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
However, when I try to set everything up in the cluster I keep getting:
kubectl apply -f constraint_template.yaml
Error from server: error when creating "constraint_template.yaml": admission webhook "validation.gatekeeper.sh" denied the request: invalid ConstraintTemplate: invalid data references: check refs failed on module {template}: errors (2):
disallowed ref data.kubernetes.namespaces[ns].labels.project
disallowed ref data.kubernetes.namespaces[ns].labels.project
Do you know how to fix this and what I did wrong? Also, in case you happen to know a better approach, just let me know.
As an alternative to the other answer, you may use a validating webhook to enforce rules based on any parameter present in the request, for example name, namespace, annotations, spec, etc.
The validating webhook can be a service running inside the cluster or external to it. This service essentially makes the decision based on the logic we put into it. For every request sent by a user, the api-server sends an admission review to the webhook, and the webhook either approves or rejects that review.
You can read more about it here; there is a more descriptive post by me here.
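As a rough sketch of how such a service is registered (the names ingress-host-policy, webhook-system, the /validate path and the caBundle stub below are all placeholders for whatever you actually deploy):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-host-policy                  # placeholder name
webhooks:
  - name: ingress-host-policy.example.com
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
    clientConfig:
      service:
        namespace: webhook-system             # placeholder: namespace of your webhook service
        name: ingress-policy-webhook          # placeholder: the in-cluster service
        path: /validate
      caBundle: <base64-encoded CA cert>      # CA that signed the webhook's serving certificate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail

The service behind it then inspects the AdmissionReview (Ingress host, namespace labels, and so on) and returns allowed: true or false.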
If you want to enforce this rule on a k8s object such as a ConfigMap or Ingress, I think you can use something like OPA.
In Kubernetes, Admission Controllers enforce semantic validation of objects during create, update, and delete operations. With OPA you can enforce custom policies on Kubernetes objects without recompiling or reconfiguring the Kubernetes API server.
reference

cert-manager is failing with Waiting for dns-01 challenge propagation: Could not determine authoritative nameservers

I have created cert-manager on aks-engine using the below command:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
my certificate spec
issuer spec
I'm using nginx as ingress. I can see the TXT record in the Azure DNS zone created by my azuredns service principal, but I'm not sure what the issue with the nameservers is.
I ran into the same error. I suspect that it's because I'm using a mix of private and public Azure DNS entries, and the record needs to get added to the public entry so Let's Encrypt can see it. However, cert-manager checks that the TXT record is visible before asking Let's Encrypt to perform the validation. I assume that the default DNS cert-manager looks at is the private one, and because there's no TXT record there, it gets stuck on this error.
The way around it, as described on cert-manager.io, is to override the default DNS using extraArgs (I'm doing this with Terraform and Helm):
resource "helm_release" "cert_manager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
set {
name = "installCRDs"
value = "true"
}
set {
name = "extraArgs"
value = "{--dns01-recursive-nameservers-only,--dns01-recursive-nameservers=8.8.8.8:53\\,1.1.1.1:53}"
}
}
The issue for me was that I was missing some annotations in the ingress:
cert-manager.io/cluster-issuer: hydrantid
kubernetes.io/tls-acme: 'true'
In my case I am using hydrantid as the issuer, but most people use letsencrypt I guess.
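For context, this is roughly where those annotations sit in the Ingress manifest (a sketch only; the name, host and service here are placeholders, and you would swap hydrantid for your own issuer):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                  # placeholder
  annotations:
    cert-manager.io/cluster-issuer: hydrantid
    kubernetes.io/tls-acme: 'true'
spec:
  tls:
    - hosts:
        - my-app.example.com                    # placeholder host
      secretName: my-app-tls                    # cert-manager stores the certificate here
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80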
I had a similar error when my certificate was stuck in pending, and below is how I resolved it:
kubectl get challenges
urChallengeName
then run the following
kubectl patch challenge/urChallengeName -p '{"metadata":{"finalizers":[]}}' --type=merge
and when you run kubectl get challenges again, it should be gone.

What is the command to execute command line arguments with NGINX Ingress Controller?

I feel like I'm missing something pretty basic here, but can't find what I'm looking for.
Referring to the NGINX Ingress Controller documentation regarding command line arguments, how exactly would you use these? Are you calling a command on the nginx-ingress-controller pod with these arguments? If so, what is the command name?
Can you provide an example?
Command line arguments are accepted by the Ingress controller executable. They can be set in the container spec of the nginx-ingress-controller Deployment manifest.
Annotations documentation:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
Command line arguments documentation:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md
If you run the command
kubectl describe deployment/nginx-ingress-controller --namespace
you will find this snippet:
Args:
  --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  --annotations-prefix=nginx.ingress.kubernetes.io
These are all command line arguments of the ingress controller, as suggested. From here you can also change --annotations-prefix=nginx.ingress.kubernetes.io.
The default annotation prefix in nginx is nginx.ingress.kubernetes.io.
!!! note The annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io.
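For example, in the Deployment manifest itself the arguments live under the container's args; a rough sketch (your image, resource names and the rest of the spec will differ, and the custom prefix shown is only an example):

spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          image: <your ingress-nginx controller image>
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --annotations-prefix=custom.nginx.example.com   # hypothetical custom prefix

Edit it with kubectl edit deployment nginx-ingress-controller -n <namespace>, or change your manifest and re-apply it, and the controller pods will restart with the new arguments.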
If you are using the Helm chart, then you can simply create a configmap named {{ include "ingress-nginx.fullname" . }}-tcp in the same namespace where the ingress controller is deployed. (Unfortunately, I wasn't able to figure out what the default value is for ingress-nginx.fullname... sorry. If someone knows, feel free to edit this answer.)
If you need to specify a different namespace for the configmap, then you might be able to use the .Values.tcp.configMapNamespace property, but honestly, I wasn't able to find it applied anywhere in the code, so YMMV.
## Allows customization of the tcp-services-configmap
##
tcp:
  configMapNamespace: "" # defaults to .Release.Namespace

  ## Annotations to be added to the tcp config configmap
  annotations: {}
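For illustration, such a TCP configmap could look roughly like this (the name, namespace and service mapping are made up; the data keys are the external ports and the values follow the namespace/service:port format):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp        # must match {{ include "ingress-nginx.fullname" . }}-tcp for your release
  namespace: ingress-nginx       # same namespace as the controller
data:
  "9000": "default/my-tcp-service:9000"   # expose external port 9000 -> service my-tcp-service, port 9000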

Locating a valid NGINX template for nginx-ingress-controller? (Kubernetes)

I am trying to follow this tutorial on configuring nginx-ingress-controller for a Kubernetes cluster I deployed to AWS using kops.
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
When I run kubectl create -f ./nginx-ingress-controller.yml, the pods are created but error out. From what I can tell, the problem lies with the following portion of nginx-ingress-controller.yml:
volumes:
  - name: tls-dhparam-vol
    secret:
      secretName: tls-dhparam
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl
Error shown on the pods:
MountVolume.SetUp failed for volume "nginx-template-volume" : configmaps "nginx-template" not found
This makes sense, because the tutorial does not have the reader create this configmap before creating the controller. I know that I need to create the configmap using:
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl
I've done this using nginx.tmpl files found in sources like this, but they don't seem to work (they always fail with invalid NGINX template errors). Log example:
I1117 16:29:49.344882 1 main.go:94] Using build: https://github.com/bprashanth/contrib.git - git-92b2bac
I1117 16:29:49.402732 1 main.go:123] Validated default/default-http-backend as the default backend
I1117 16:29:49.402901 1 main.go:80] mkdir /etc/nginx-ssl: file exists already exists
I1117 16:29:49.402951 1 ssl.go:127] using file '/etc/nginx-ssl/dhparam/dhparam.pem' for parameter ssl_dhparam
F1117 16:29:49.403962 1 main.go:71] invalid NGINX template: template: nginx.tmpl:1: function "where" not defined
The image version used is quite old, but I've tried newer versions with no luck.
containers:
  - name: nginx-ingress-controller
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
This thread is similar to my issue, but I don't quite understand the proposed solution. Where would I use docker cp to extract a usable template from? Seems like the templates I'm using use a language/syntax incompatible with Docker...?
To copy the nginx template file from the ingress controller pod to your local machine, you can first grab the name of the pod with kubectl get pods, then run kubectl exec [POD_NAME] -it -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl.
This will leave you with the nginx.tmpl file you can then edit and push back up as a configmap. I would recommend though keeping custom changes to the template to a minimum as it can make it hard for you to update the controller in the future.
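Roughly, the round trip looks like this (the label selector is only a guess based on typical manifests, so adjust it to whatever labels your controller pods actually carry):

# grab the controller pod name (adjust the label selector to your deployment)
POD_NAME=$(kubectl get pods -l k8s-app=nginx-ingress-lb -o jsonpath='{.items[0].metadata.name}')
# dump the template the controller is actually running with
kubectl exec "$POD_NAME" -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl
# edit nginx.tmpl as needed, then publish it as the configmap the volume mount expects
kubectl delete configmap nginx-template --ignore-not-found
kubectl create configmap nginx-template --from-file=nginx.tmpl=nginx.tmpl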
Hope this helps!

How to disable interception of errors by Ingress in a Tectonic kubernetes setup

I have a couple of NodeJS backends running as pods in a Kubernetes setup, with Ingress-managed nginx over it.
These backends are API servers, and can return 400, 404, or 500 responses during normal operations. These responses would provide meaningful data to the client; besides the status code, the response has a JSON-serialized structure in the body informing about the error cause or suggesting a solution.
However, Ingress will intercept these error responses, and return an error page. Thus the client does not receive the information that the service has tried to provide.
There's a closed ticket in the kubernetes-contrib repository suggesting that it is now possible to turn off error interception: https://github.com/kubernetes/contrib/issues/897. Being new to kubernetes/ingress, I cannot figure out how to apply this configuration in my situation.
For reference, this is the output of kubectl get ingress <ingress-name>: (redacted names and IPs)
Name:             ingress-name-redacted
Namespace:        default
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                        Path  Backends
  ----                        ----  --------
  public.service.example.com
                              /     service-name:80 (<none>)
Annotations:
  rewrite-target:         /
  service-upstream:       true
  use-port-in-redirects:  true
Events:                   <none>
I have solved this on Tectonic 1.7.9-tectonic.4.
In the Tectonic web UI, go to Workloads -> Config Maps and filter by namespace tectonic-system.
In the config maps shown, you should see one named "tectonic-custom-error".
Open it and go to the YAML editor.
In the data field you should have an entry like this:
custom-http-errors: '404, 500, 502, 503'
which configures which HTTP responses will be captured and be shown with the custom Tectonic error page.
If you don't want some of those, just remove them, or clear them all.
It should take effect as soon as you save the updated config map.
Of course, you could do the same from the command line with kubectl edit:
$> kubectl edit cm tectonic-custom-error --namespace=tectonic-system
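Or, if you prefer a non-interactive one-liner, a kubectl patch along these lines should also work (here keeping only 502 and 503 as an example value):

$> kubectl patch cm tectonic-custom-error --namespace=tectonic-system \
     --type merge -p '{"data":{"custom-http-errors":"502, 503"}}'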
Hope this helps :)
