I'm facing a problem: I need certs for my Keycloak inside a k8s cluster so it can use the nginx ingress. What is the easiest way to add them?
I started like this:
kubectl create secret tls tls-keycloak-ingress --cert=localtest.me.crt --key=localtest.me.pem
And then reference the secret in the chart's values yaml:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  path: /auth/?(.*)
  hosts:
    - keycloak.localtest.me
  tls:
    - hosts:
        - keycloak.localtest.me
      secretName: tls-keycloak-ingress
But should I create them on the host machine? Or with kubectl somehow?
This is typically solved by adding cert-manager to the cluster. It watches the tls sections of all Ingress objects and issues certificates using the provided Let's Encrypt account:
https://cert-manager.io/docs/tutorials/acme/ingress/
It not only issues the certificate and stores it in the appropriate secret, but also renews it automatically.
NOTE: if you are using Helm 3, skip the Tiller step.
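For illustration, a minimal sketch of the wiring (issuer name and e-mail are placeholders, not taken from the question; the exact steps are in the tutorial linked above):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod            # placeholder issuer name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder account e-mail
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx

With the issuer in place, annotating the Ingress with cert-manager.io/cluster-issuer: letsencrypt-prod is enough; cert-manager then creates and renews the certificate in the secret named under the Ingress's tls section (tls-keycloak-ingress here).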
Related
I have a weird problem - when querying my internal hostname xxx.home.arpa, e.g. with openssl s_client -connect xxx.home.arpa:443, one (example) pod
- image: docker.io/library/node:8.17.0-slim
  name: node
  args:
    - "86400"
  command:
    - sleep
gets a response with the DEFAULT NGINX INGRESS CERTIFICATE.
Another pod in the same namespace, running the same command, gets a response with my custom certificate.
Question:
Why does one pod RECEIVE a different cert for the same request?
For the purpose of this problem, please assume that cert-manager and the certs are properly configured - they work across most of the system; it's only a few pods that are misbehaving.
Configuration: k8s nginx ingress, Calico CNI, a custom CoreDNS svc that manages DNS responses (might be important?), and my own CA.
Edit:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-03-13T06:54:17Z"
  generation: 1
  name: gerrit-ingress
  namespace: gerrit
  resourceVersion: "739842"
  uid: f22034ab-0ed8-4779-b01e-2738e6f63eb7
spec:
  rules:
  - host: gerrit.home.arpa
    http:
      paths:
      - backend:
          service:
            name: gerrit-gerrit-service
            port:
              number: 80
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - gerrit.home.arpa
    secretName: gerrit-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.2
Most of the configuration (except DNS) is up here.
As it turns out, my initial guesses were far off - the particular container had a set of tools that either did not send a servername or did not support SNI at all (which was the problem), specifically yarn 1.x and openssl 1.0.x.
The problem was with SNI, of course; newer openssl and curl set the servername by default, satisfying the SNI requirement.
For this I've considered two solutions:
Wildcard DNS for the clients that do not support SNI, which is easier but does not feel secure
TLS termination with a reverse proxy, allowing me to transparently use a client with SNI support, which I haven't tried yet.
I went with wildcard DNS, though I don't feel that this should be done in prod. :)
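For anyone debugging the same symptom, the difference is easy to reproduce with openssl s_client (hostname taken from this question; -noservername needs openssl 1.1.1+, while openssl 1.0.x never sent SNI in the first place):

# With SNI: nginx picks the right server block and returns the cert from gerrit-tls
openssl s_client -connect gerrit.home.arpa:443 -servername gerrit.home.arpa </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

# Without SNI: nginx falls back to its default server and returns the
# "Kubernetes Ingress Controller Fake Certificate"
openssl s_client -connect gerrit.home.arpa:443 -noservername </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer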
I run a bare-metal Kubernetes cluster and want to map services onto URLs instead of ports (I have used NodePort so far).
To achieve this I tried to install an IngressController so I could deploy Ingress objects containing the routing rules.
I installed the IngressController via helm:
helm install my-ingress stable/nginx-ingress
and the deployment worked fine so far. To just use the node's domain name, I enabled hostNetwork: true in the nginx-ingress-controller.
Then, I created an Ingress resource with this definition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
which also deployed fine. Finally, when I try to access http://my-url.com/testpath I get a login prompt. I have not set up login credentials anywhere, nor do I intend to, as the services should be publicly available and/or handle authentication on their own.
How do I disable this behavior? I want to access the services just as I would use a NodePort solution.
To clarify the case, I am posting the answer (from the comments area) as Community Wiki.
The problem here was not in the configuration but in the environment - another ingress was running during the Longhorn deployment, which forced basic authentication onto both of them.
To resolve the problem it was necessary to clean up all the deployments.
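For anyone hitting the same symptom, a quick sketch of how to check for a second ingress controller or a stray auth-annotated Ingress (pod names and labels depend on how the controllers were installed, so this is only a rough filter):

# Look for more than one ingress controller pod across all namespaces
kubectl get pods -A | grep -i ingress

# List every Ingress and look for unexpected auth-* annotations
kubectl get ingress -A
kubectl get ingress -A -o yaml | grep -n "auth"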
I'm migrating services into a Kubernetes cluster on minikube. These services require a self-signed certificate on load; accessing a service via NodePort works perfectly and brings up the certificate prompt in the browser, but accessing it via the ingress host (the domain is mapped locally in /etc/hosts) presents a Kubernetes Ingress Controller Fake Certificate by Acme and skips my self-signed cert without any message.
TLS should be decrypted inside the app and not in the Ingress, and the tls-acme: "false" annotation does not help - I still get the fake cert.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # decryption of tls occurs in the backend service
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: admin.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 443
When signing in, the certificate prompt should appear before the page loads.
minikube version: v1.15.1
kubectl version: 1.19
using ingress-nginx 3.18.0
The problem turned out to be a bug in Minikube, plus having to enable SSL passthrough in the nginx controller (in addition to the annotation) with the flag --enable-ssl-passthrough=true.
I was doing all my cluster testing on a Minikube cluster version v1.15.1 with Kubernetes v1.19.4, where SSL passthrough failed. After following the guidance in the ingress-nginx GitHub issue, I discovered that the issue didn't reproduce in kind, so I tried deploying my app on a new AWS cluster (k8s version 1.18) and everything worked great.
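For reference, a sketch of where the flag goes (the deployment and namespace names assume a standard ingress-nginx install; on older Minikube versions the addon may live in kube-system instead):

# Open the controller Deployment and add the flag to the container args:
#   kubectl edit deployment ingress-nginx-controller -n ingress-nginx
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        # ...existing args kept as they are...
        - --enable-ssl-passthrough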
I'm trying to follow up this example https://github.com/oauth2-proxy/oauth2-proxy/tree/master/contrib/local-environment/kubernetes but with Keycloak.
I want to deploy Keycloak in a k8s cluster with kind and helm charts, and I have this config in my Values.yaml:
keycloak:
  basepath: auth
  username: admin
  password: password
  extraEnv: |
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  path: /auth/?(.*)
  hosts:
    - keycloak.localtest.me
keycloak.localtest.me is in my DNS hosts. Looking at the nginx configuration, I see that the Keycloak section is present, but with *.keycloak.example.com.
And in nginx log I see this:
Error getting SSL certificate "default/keycloak-tls": local SSL certificate default/keycloak-tls was not found
So when I go to keycloak.localtest.me nothing happens. How can I access Keycloak? For example, http://keycloak.localtest.me/auth/realms/master?
UPDATE
@ArghyaSadhu pointed me to this:
$ kubectl get secret keycloak-tls -n ingress-nginx
Which gives me: Error from server (NotFound): secrets "keycloak-tls" not found
What can I do to make this small example work? Do I need to create keycloak-tls somewhere, or maybe disable it - and if so, how?
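In case it helps, a minimal sketch of one way to satisfy that lookup for local testing - generate a self-signed cert for the host and create the secret the error message names (default/keycloak-tls). This is an assumption for local testing, not the chart's documented flow:

# Self-signed certificate for the local hostname (local testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout keycloak.localtest.me.key -out keycloak.localtest.me.crt \
  -subj "/CN=keycloak.localtest.me"

# Create the secret the ingress controller is looking up (default/keycloak-tls)
kubectl create secret tls keycloak-tls \
  --cert=keycloak.localtest.me.crt \
  --key=keycloak.localtest.me.key \
  -n default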
I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL http://foo.com or foo.com, it just goes to foo.com, instead of https://foo.com as I would expect. Connecting to https://foo.com explicitly produces the desired HTTPS connection.
I have tried every annotation and config imaginable, but it stubbornly refuses, although it shouldn't even be necessary, since the docs imply that the redirect is automatic if TLS is specified. Am I fundamentally misunderstanding how ingress resources work?
Update: Is it necessary to manually install nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.
Kubernetes version: 1.8.5-gke.0
Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - foo.com
    secretName: tls-secret
  rules:
  - host: foo.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: foo-prod
          servicePort: 80
kubectl describe ing https-ingress output
Name:             https-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (10.56.0.3:8080)
TLS:
  tls-secret terminates foo.com
Rules:
  Host     Path  Backends
  ----     ----  --------
  foo.com
           /*    foo-prod:80 (<none>)
Annotations:
  force-ssl-redirect:  true
  secure-backends:     true
  ssl-redirect:        true
Events:  <none>
The problem was indeed the fact that the Nginx Ingress is not standard on the Google Cloud Platform, and needs to be installed manually - doh!
However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish in hopes of helping anyone else who uses that specific cloud and has that specific need, and finds generic guides to not quite fit the bill.
Get Cluster Credentials
This is a GCP-specific step that tripped me up for a while - you're dealing with it if you get weird errors like
kubectl unable to connect to server: x509: certificate signed by unknown authority
when trying to run kubectl commands. Run this to set up your console:
gcloud container clusters get-credentials YOUR-K8S-CLUSTER-NAME --zone YOUR-K8S-CLUSTER-ZONE
Install Helm
Helm by itself is not hard to install, and the directions can be found on GCP's own docs; what they neglect to mention, however, is that on new versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after helm init:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install Nginx Ingress through Helm
Here's another step that tripped me up - rbac.create=true is necessary for the aforementioned RBAC factor.
helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true
Create your Ingress resource
This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see @JahongirRahmonov's example above. What you MUST keep in mind is that this step takes anywhere from half an hour to over an hour to take effect - if you change the config and check again immediately, it won't be set up yet, but don't take that as a sign that you messed something up! Wait a while and check again first.
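As a starting point, a sketch of the redirect annotations with the nginx-specific prefix once the controller is installed (host, secret, and service names are the placeholders from the question above; the annotation prefix depends on the controller version):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - foo.com
    secretName: tls-secret
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-prod
          servicePort: 80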
It's hard to believe this is how much it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet so critical need.
GCP has a default ingress controller which at the time of this writing cannot force https.
You need to explicitly manage an NGINX Ingress Controller.
See this article on how to do that on GCP.
Then add this annotation to your ingress:
kubernetes.io/ingress.allow-http: "false"
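For example, in the Ingress metadata (a sketch showing only where the annotation goes; the ingress name is a placeholder):

metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"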
Hope it helps.