I'm trying to follow this example https://github.com/oauth2-proxy/oauth2-proxy/tree/master/contrib/local-environment/kubernetes but with Keycloak.
I want to deploy Keycloak in a k8s cluster with kind and Helm charts, with this config in my values.yaml:
keycloak:
  basepath: auth
  username: admin
  password: password
  extraEnv: |
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
    path: /auth/?(.*)
    hosts:
      - keycloak.localtest.me
Here keycloak.localtest.me is in my DNS/hosts. Looking at the nginx configuration I can see that a Keycloak configuration is present, but for *.keycloak.example.com.
And in the nginx log I see this:
Error getting SSL certificate "default/keycloak-tls": local SSL certificate default/keycloak-tls was not found
So when I go to keycloak.localtest.me nothing happens. How can I access Keycloak? For example http://keycloak.localtest.me/auth/realms/master?
UPDATE
@ArghyaSadhu pointed me to this:
$ kubectl get secret keycloak-tls -n ingress-nginx
Which gives me: Error from server (NotFound): secrets "keycloak-tls" not found
What can I do to make this small example work? Do I need to create the keycloak-tls secret somewhere, or can I disable it, and if so, how?
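For example, would it be enough to create the missing secret myself with a self-signed cert? A rough sketch of what I have in mind (the default namespace is taken from the error message above; I'm not sure this is what the chart expects):
# Hypothetical sketch: self-signed cert for the ingress host, then the secret the error refers to
# (namespace "default" taken from the "default/keycloak-tls" error message)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout keycloak.localtest.me.key -out keycloak.localtest.me.crt \
  -subj "/CN=keycloak.localtest.me"
kubectl create secret tls keycloak-tls -n default \
  --cert=keycloak.localtest.me.crt --key=keycloak.localtest.me.key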
Related
I've installed a secured Apache NiFi cluster with the helm-nifi chart (with single-user authorization).
When I port-forward to my PC and access https://localhost:8443 I can log in to NiFi without issues and I can see my cluster.
But when I access NiFi via my ingress URL (nifi.dev-tools.mycompany.com) and try to log in I get an error:
Inside the pod I can see this error in nifi-user.log:
Caused by: org.springframework.security.oauth2.jwt.BadJwtException: An error occurred while attempting to decode the Jwt: Signed JWT rejected: Another algorithm expected, or no matching key(s) found
at org.springframework.security.oauth2.jwt.NimbusJwtDecoder.createJwt(NimbusJwtDecoder.java:180)
at org.springframework.security.oauth2.jwt.NimbusJwtDecoder.decode(NimbusJwtDecoder.java:137)
at org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationProvider.getJwt(JwtAuthenticationProvider.java:97)
... 104 common frames omitted
Caused by: com.nimbusds.jose.proc.BadJOSEException: Signed JWT rejected: Another algorithm expected, or no matching key(s) found
at com.nimbusds.jwt.proc.DefaultJWTProcessor.process(DefaultJWTProcessor.java:357)
at com.nimbusds.jwt.proc.DefaultJWTProcessor.process(DefaultJWTProcessor.java:303)
at org.springframework.security.oauth2.jwt.NimbusJwtDecoder.createJwt(NimbusJwtDecoder.java:154)
... 106 common frames omitted
My relevant values are:
replicaCount: 3
externalSecure: true
isNode: true
auth:
  singleUser:
    username: username
    password: changemechangeme
certManager:
  enabled: true
  clusterDomain: cluster.local
  keystorePasswd: changeme
  truststorePasswd: changeme
  replaceDefaultTrustStore: true
  additionalDnsNames:
    - localhost
    - nifi.dev-tools.mycompany.com
ingress:
  enabled: true
  # className: nginx
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: "localhost:8443"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "https://localhost:8443"
    nginx.ingress.kubernetes.io/proxy-redirect-to: "https://nifi.dev-tools.mycompany.com"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - hosts:
        - nifi.dev-tools.mycompany.com
      secretName: nifi-ca
  hosts:
    - nifi.dev-tools.mycompany.com
  path: /
When I check the TLS certificate served at my ingress URL I can see it is not the nifi-ca certificate but my default ingress certificate, while through the port-forward on localhost the expected certificate is served.
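For reference, this is roughly how I compared the certificates (plain openssl, nothing specific to the NiFi chart):
# What the ingress serves for the external hostname:
openssl s_client -connect nifi.dev-tools.mycompany.com:443 \
  -servername nifi.dev-tools.mycompany.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
# What the pod serves locally through the port-forward:
openssl s_client -connect localhost:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer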
So I guess it's related... how can I solve it?
I couldn't make it work with a single user, but it works for me with Keycloak as the user management.
After I deployed Keycloak in my cluster I configured the values.yaml to:
oidc:
  enabled: true
  discoveryUrl: http://keycloack.mycompany.com/realms/nifi/.well-known/openid-configuration
  clientId: nifi
  clientSecret: mysecret
  claimIdentifyingUser: email
  admin: myuser@mycompany.com
  ## Request additional scopes, for example profile
  additionalScopes:
To make it work I also needed to update the ingress settings (inside values.yaml) and add the following annotations (the sketch after this list shows roughly where they sit in values.yaml):
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "1728000"
nginx.ingress.kubernetes.io/session-cookie-max-age: "1728000"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
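Roughly, these annotations go under the chart's ingress settings, so the relevant part of values.yaml ends up looking like this (structure assumed to mirror the ingress block from the question above):
# Sketch of where the annotations above sit in values.yaml
# (structure assumed from the helm-nifi ingress block in the question)
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "1728000"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "1728000"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1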
Now I can log in to the secured cluster with the user I configured in Keycloak.
We are using AKS (Kubernetes) for our application.
We have installed the SSL certs as secrets, and we have nginx-ingress in a separate namespace.
Once I applied the certificates I started getting 404 Not Found from nginx.
From the nginx side I verified everything; the controller reloaded with the new configuration, but I am not getting the home page. Any idea on this issue?
Ref: nginx Ingress installation link
With curl I can see the proper certificate is installed:
curl -v -k --resolve azx-devops-monitoring.aaa.com:443:10.11.6.100 https://azx-devops-monitoring.aaa.com
But with the browser I get 404 Not Found, and the certificate is also the nginx fake certificate.
URLs I tried:
https://azx-devops-monitoring.aaa.com
https://10.11.6.100
kubectl describe ingress st2-ingress -n st2
Name:             st2-ingress
Namespace:        st2
Address:          10.ab.6.xyz
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /     stackstormha-st2web:80 (10.xxx.1.18x:80,10.xxx.3.18y:80)
Annotations:  kubernetes.io/ingress.class: nginx
Events:       <none>
kubectl describe secret aks-ingress-tls -n st2
Name: aks-ingress-tls
Namespace: st2
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
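For reference, I assume the ingress needs a tls section roughly like this (host and secret name taken from the output above; how to set it through the chart I'm using is unclear to me):
# Sketch of the tls section I assume the st2 ingress needs
# (host and secret name taken from the output above; the chart values key is an assumption)
spec:
  tls:
    - hosts:
        - azx-devops-monitoring.aaa.com
      secretName: aks-ingress-tls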
I'm migrating services into a Kubernetes cluster on minikube. These services require a self-signed certificate on load; accessing a service via NodePort works perfectly and the browser asks for the certificate, but accessing it via the ingress host (the domain is added locally in /etc/hosts) serves a Kubernetes Ingress Controller Fake Certificate by Acme and skips my self-signed cert without any message.
TLS should be decrypted inside the app and not in the ingress, and the tls-acme: "false" annotation does not work; I still get the fake cert.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # decryption of tls occurs in the backend service
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/tls-acme: "false"
spec:
  rules:
    - host: admin.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 443
When signing in it should show the certificate prompt before loading.
minikube version: v1.15.1
kubectl version: 1.19
using ingress-nginx 3.18.0
The problem turned out to be a bug in minikube, plus having to enable SSL passthrough in the nginx controller (in addition to the annotation) with the flag --enable-ssl-passthrough=true.
I was doing all my cluster testing on a minikube cluster v1.15.1 with Kubernetes v1.19.4, where SSL passthrough failed. After following the guidance in the ingress-nginx GitHub issue, I found that the issue didn't reproduce in kind, so I deployed my app on a new AWS cluster (Kubernetes 1.18) and everything worked great.
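For anyone else hitting this: the flag is an argument of the nginx ingress controller itself, so it has to be added to the controller's container args, roughly like this (deployment name and namespace depend on how ingress-nginx was installed):
# Add the flag to the controller's args (deployment name/namespace depend on the install)
kubectl -n ingress-nginx edit deployment ingress-nginx-controller
# then, under spec.template.spec.containers[0].args, add:
#   - --enable-ssl-passthrough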
I'm facing an issue: I need certs for my Keycloak inside a k8s cluster so I can use it behind an nginx ingress. What is the easiest way to add them?
I started like this:
kubectl create secret tls tls-keycloak-ingress --cert=localtest.me.crt --key=localtest.me.pem
And then reference the secret in the chart's values:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  path: /auth/?(.*)
  hosts:
    - keycloak.localtest.me
  tls:
    - hosts:
        - keycloak.localtest.me
      secretName: tls-keycloak-ingress
But should I create the certificates on the host machine, or with kubectl somehow?
This is typically solved by adding cert-manager to the cluster. It then watches the tls sections of all Ingress objects and issues certificates using the provided Let's Encrypt account:
https://cert-manager.io/docs/tutorials/acme/ingress/
It not only issues the certificate and stores it in the appropriate secret, it also renews it automatically.
NOTE: if you are using Helm 3, skip the Tiller step.
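As a rough sketch of what that tutorial sets up (the issuer name and email below are placeholders), it boils down to a ClusterIssuer plus an annotation on the ingress that references it:
# Rough sketch of a Let's Encrypt ClusterIssuer (name and email are placeholders)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
On the ingress itself, an annotation such as cert-manager.io/cluster-issuer: letsencrypt-prod tells cert-manager to issue the certificate into the secret referenced by secretName in the tls section.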
I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL http://foo.com or foo.com, it just goes to foo.com, instead of https://foo.com as I would expect. Connecting to https://foo.com explicitly produces the desired HTTPS connection.
I have tried every annotation and config imaginable, but it stubbornly refuses, although it shouldn't even be necessary since docs imply that the redirect is automatic if TLS is specified. Am I fundamentally misunderstanding how ingress resources work?
Update: Is it necessary to manually install nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.
Kubernetes version: 1.8.5-gke.0
Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - foo.com
      secretName: tls-secret
  rules:
    - host: foo.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: foo-prod
              servicePort: 80
kubectl describe ing https-ingress output
Name:             https-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (10.56.0.3:8080)
TLS:
  tls-secret terminates foo.com
Rules:
  Host     Path  Backends
  ----     ----  --------
  foo.com
           /*    foo-prod:80 (<none>)
Annotations:
  force-ssl-redirect:  true
  secure-backends:     true
  ssl-redirect:        true
Events:  <none>
The problem was indeed the fact that the Nginx Ingress is not standard on the Google Cloud Platform, and needs to be installed manually - doh!
However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish in hopes of helping anyone else who uses that specific cloud and has that specific need, and finds generic guides to not quite fit the bill.
Get Cluster Credentials
This is a GCP specific step that tripped me up for a while - you're dealing with it if you get weird errors like
kubectl unable to connect to server: x509: certificate signed by unknown authority
when trying to run kubectl commands. Run this to set up your console:
gcloud container clusters get-credentials YOUR-K8S-CLUSTER-NAME --zone YOUR-K8S-CLUSTER-ZONE
Install Helm
Helm by itself is not hard to install, and the directions can be found on GCP's own docs; what they neglect to mention, however, is that on new versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after helm init:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install Nginx Ingress through Helm
Here's another step that tripped me up - rbac.create=true is necessary for the aforementioned RBAC factor.
helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true
Create your Ingress resource
This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see @JahongirRahmonov's example above. What you MUST keep in mind is that this step takes anywhere from half an hour to over an hour to take effect - if you change the config and check again immediately, it won't be set up yet, but don't take that as an indication that you messed something up! Wait a while and check again first.
It's hard to believe this is how much it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet so critical need.
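One more note: on newer ingress-nginx releases the redirect annotations use the nginx.ingress.kubernetes.io/ prefix instead of the bare ingress.kubernetes.io/ prefix shown in the question; a minimal sketch of the redirect-related metadata, assuming the same ingress:
# Sketch: redirect annotations with the newer ingress-nginx prefix (controller version is an assumption)
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"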
GCP has a default ingress controller which at the time of this writing cannot force https.
You need to explicitly manage an NGINX Ingress Controller.
See this article on how to do that on GCP.
Then add this annotation to your ingress:
kubernetes.io/ingress.allow-http: "false"
Hope it helps.