Using self-signed certificates in nginx Ingress

I'm migrating services into a Kubernetes cluster on minikube. These services require a self-signed certificate on load. Accessing a service via NodePort works perfectly and prompts for the certificate in the browser, but accessing it via the ingress host (the domain is mapped locally in /etc/hosts) serves me the "Kubernetes Ingress Controller Fake Certificate" by Acme and silently skips my self-signed cert.
TLS should be terminated inside the app, not in the ingress, and the tls-acme: "false" annotation does not help; I still get the fake cert:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # decryption of tls occurs in the backend service
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: admin.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 443
When signing in, it should prompt for the certificate (as it does via NodePort) before loading.
minikube version: v1.15.1
kubectl version: 1.19
using ingress-nginx 3.18.0

The problem turned out to be a bug in Minikube, plus having to enable SSL passthrough in the nginx controller itself (in addition to the annotation) with the flag --enable-ssl-passthrough=true.
I was doing all my cluster testing on a Minikube cluster v1.15.1 with Kubernetes v1.19.4, where SSL passthrough failed. After following the guidance in the ingress-nginx GitHub issue, I discovered that the issue didn't reproduce in kind, so I deployed my app on a new AWS cluster (Kubernetes 1.18) and everything worked great.
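For reference, the flag goes on the controller itself, not on the Ingress object. A minimal sketch of adding it, assuming the controller runs as a Deployment named ingress-nginx-controller in the ingress-nginx namespace (names vary by install method):
kubectl -n ingress-nginx edit deployment ingress-nginx-controller
# in the controller container spec, append the flag to its args, e.g.:
#   args:
#     - /nginx-ingress-controller
#     - --enable-ssl-passthrough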

Related

Accessing a service from another pod in Kubernetes

I have two services running in my k8s cluster.
I am trying to access my wallet service from my user service, but my curl command just returns a 504 Gateway Timeout.
Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/use-regex: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /api/v1$uri
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api/v1/wallet
        pathType: Prefix
        backend:
          service:
            name: wallet-service
            port:
              number: 80
      - path: /api/v1/user
        pathType: Prefix
        backend:
          service:
            name: accounts-service
            port:
              number: 80
This is how I pass the env to my accounts service:
http://wallet-service:3007
and this is the URL I hit on my endpoint:
curl http://EXTERNAL_IP/api/v1/user/health/wallet
Every other, non-related endpoint works.
Any help is appreciated.
I am running Azure Kubernetes (AKS).
"I have two services running in my k8s."
Do you have an actual K8s Service object?
apiVersion: ...
kind: Service
Check it with kubectl get svc -A and you should see your services there
Check that your pods are exposed to the services:
kubectl get endpoints -A
Are you running on a cloud provider (GCP, Azure, AWS, etc.)? If so, check your security configuration as well (NSG for Azure, security groups for AWS, etc.).
Check inner communication:
# Log into one of the pods
kubectl exec -it -n <namespace> <pod name> -- sh
# Try to connect to the other service with the FQDN
curl -sv <servicename>.<namespace>.svc.cluster.local
Update:
You commented that you are running on Azure; check that the desired ports are open under the NSG.
Assuming you are running AKS, find the MC_xxxx node resource group that AKS created; your NSG is located under that resource group. Edit it and open the desired ports.
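As a sketch, a port could be opened with the Azure CLI; the resource group and NSG names below are hypothetical placeholders for the ones AKS created in your subscription:
az network nsg rule create \
  --resource-group MC_my-rg_my-aks_westeurope \
  --nsg-name aks-agentpool-00000000-nsg \
  --name allow-http \
  --priority 100 \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80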
You are trying to connect to http://wallet-service:3007/api/v1/wallet/health. Where did you get the 3007 port? Your Service and Ingress expose port 80.
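If the wallet app really listens on 3007, that mismatch would explain the 504. A minimal sketch of a Service that lines up with both the http://wallet-service:3007 URL and the Ingress backend; the app: wallet selector label is a hypothetical placeholder for the wallet pod's actual labels:
apiVersion: v1
kind: Service
metadata:
  name: wallet-service
  namespace: dev
spec:
  selector:
    app: wallet          # hypothetical; must match the wallet pod's labels
  ports:
    - name: http
      port: 80           # used by the Ingress backend
      targetPort: 3007   # container port the app listens on
    - name: direct
      port: 3007         # used by http://wallet-service:3007 in-cluster
      targetPort: 3007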

Different hosts (pods in Kubernetes) respond with different certificates for the same hostname

I have a weird problem: when asking for my internal hostname xxx.home.arpa via e.g. openssl s_client -connect xxx.home.arpa:443, one (example) pod
- image: docker.io/library/node:8.17.0-slim
  name: node
  args:
  - "86400"
  command:
  - sleep
is getting a response with the DEFAULT NGINX INGRESS CERTIFICATE.
Another pod in the same namespace, running the same command, gets a response with my custom certificate.
Question:
Why does one pod receive a different cert for the same request?
For the purpose of this problem, please assume that cert-manager and the certs are properly configured; they work across most of the system, and only a few pods misbehave.
Configuration: k8s nginx ingress, Calico CNI, a custom CoreDNS service which manages DNS responses (might be important?), my own CA authority.
Edit: the Ingress in question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-03-13T06:54:17Z"
  generation: 1
  name: gerrit-ingress
  namespace: gerrit
  resourceVersion: "739842"
  uid: f22034ab-0ed8-4779-b01e-2738e6f63eb7
spec:
  rules:
  - host: gerrit.home.arpa
    http:
      paths:
      - backend:
          service:
            name: gerrit-gerrit-service
            port:
              number: 80
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - gerrit.home.arpa
    secretName: gerrit-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.2
Most of the configuration (except DNS) is up here.
As it turns out, my initial guesses were far off: the particular container had a set of tools that were either configured not to send a servername, or did not support SNI at all (which was the problem), specifically yarn:1.x and openssl:1.0.x.
The problem was SNI, of course; newer openssl and curl send the servername by default, satisfying the SNI requirement.
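The difference is easy to reproduce with openssl itself; without -servername no SNI is sent and nginx falls back to its default certificate, while with it the ingress can match the host:
# no SNI -> default ingress controller certificate
openssl s_client -connect xxx.home.arpa:443
# explicit SNI -> the custom certificate for the matched host
openssl s_client -connect xxx.home.arpa:443 -servername xxx.home.arpa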
I considered two solutions for this:
Wildcard DNS for the clients that do not support SNI, which is easier but does not feel secure.
TLS termination with a reverse proxy, allowing me to transparently use clients with SNI support, which I haven't tried yet.
I went with wildcard DNS, though I don't feel this should be done in prod. :)

Disable external authentication on Kubernetes Ingress

I run a bare-metal Kubernetes cluster and want to map services onto URLs instead of ports (I have used NodePort so far).
To achieve this I installed an IngressController so I can deploy Ingress objects containing the routing.
I installed the IngressController via helm:
helm install my-ingress stable/nginx-ingress
and the deployment worked fine so far. To use just the node's domain name, I enabled hostNetwork: true in the nginx-ingress-controller.
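(For reference, a sketch of setting the same thing at install time, assuming the now-deprecated stable/nginx-ingress chart exposes the usual controller.hostNetwork value:)
helm install my-ingress stable/nginx-ingress --set controller.hostNetwork=true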
Then, I created an Ingress deployment with this definition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
which also deployed fine. Finally, when I try to access http://my-url.com/testpath I get a login prompt. I never set up login credentials, nor do I intend to, as the services should be publicly available and/or handle authentication on their own.
How do I disable this behavior? I want to access the services just as I would with a NodePort solution.
To clarify the case, I am posting the answer (from the comments area) as a Community Wiki.
The problem here was not in the configuration but in the environment: another ingress was running in the cluster from the Longhorn deployment, which forced basic authentication onto both of them.
To resolve the problem it was necessary to clean up all the deployments.
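A sketch of how such a conflict can be spotted, using only standard commands:
# look for more than one ingress controller and for stray Ingress objects
kubectl get pods -A | grep -i ingress
kubectl get ingress -A
helm list -A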

Deploy a Spring MVC application using GCP: Cloud SQL, Kubernetes (Service and Ingress) and HTTP(S) Load Balancer with a Google-managed certificate

Let me explain what the deployment consists of. First of all I created a Cloud SQL db by importing some data. To connect the db to the application I used cloud-sql-proxy, and so far everything works.
I created a Kubernetes cluster with a pod containing the Docker container of the application I want to deploy, and so far everything works... To reach the application over HTTPS I followed several online guides (e.g. https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#console); they all converge on using a Kubernetes Service and Ingress. The Service maps Spring's 8080 to 80, while the Ingress creates a load balancer that exposes an HTTPS frontend. I configured a health check and created a (Google-managed) certificate associated with a domain which maps to the static IP assigned to the Ingress.
Apparently everything works, but as soon as you try to reach https://example.org/ from the browser, you are correctly redirected to the login page (http://example.org/login) - but as you can see it switches to the HTTP protocol, and Google obviously returns a 404 since HTTP is disabled. Forcing HTTPS on the redirect target (https://example.org/login) for some absurd reason adds "www" in front of the domain name (https://www.example.org/login). If you skip the domain and use the static IP instead, the www problem disappears; however, every HTTPS request still keeps switching to HTTP.
P.S. The general goal is to have HTTP up to the load balancer (Google's internal network) and HTTPS between the load balancer and the client.
Can anyone help me? In case it helps, I am posting the deployment's yaml file. Thank you very much!
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app # Label for the Deployment
  name: my-app # Name of Deployment
spec:
  minReadySeconds: 60 # Number of seconds to wait after a Pod is created and its status is Ready
  selector:
    matchLabels:
      run: my-app
  template: # Pod template
    metadata:
      labels:
        run: my-app # Labels Pods from this Deployment
    spec: # Pod specification; each Pod created by this Deployment has this specification
      containers:
      - image: eu.gcr.io/my-app/my-app-production:latest # Application to run in Deployment's Pods
        name: my-app-production # Container name
        # Note: The following line is necessary only on clusters running GKE v1.11 and lower.
        # For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#align_rollouts
        ports:
        - containerPort: 8080
          protocol: TCP
      - image: gcr.io/cloudsql-docker/gce-proxy:1.17
        name: cloud-sql-proxy
        command:
        - "/cloud_sql_proxy"
        - "-instances=my-app:europe-west6:my-app-cloud-sql-instance=tcp:3306"
        - "-credential_file=/secrets/service_account.json"
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: my-app-service-account-secret-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: my-app-service-account-secret-volume
        secret:
          secretName: my-app-service-account-secret
      terminationGracePeriodSeconds: 60 # Number of seconds to wait for connections to terminate before shutting down Pods
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-app-health-check
spec:
  healthCheck:
    checkIntervalSec: 60
    port: 8080
    type: HTTP
    requestPath: /health/check
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc # Name of Service
  annotations:
    cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "my-app-health-check"}'
spec: # Service's specification
  type: ClusterIP
  selector:
    run: my-app # Selects Pods labelled run: my-app
  ports:
  - port: 80 # Service's port
    protocol: TCP
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "example-org"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: my-app-svc
    servicePort: 80
  tls:
  - secretName: example-org
    hosts:
    - example.org
---
As I mentioned in the comment section, you can redirect HTTP to HTTPS.
Google Cloud has quite good documentation with step-by-step guides, including firewall configuration and tests. You can find the guide here.
I would also suggest reading docs like:
Traffic management overview for external HTTP(S) load balancers
Setting up traffic management for external HTTP(S) load balancers
Routing and traffic management
As an alternative, you could check the NGINX Ingress with the proper annotation (force-ssl-redirect). Some examples can be found here.
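A minimal sketch of that annotation, assuming TLS is terminated at the nginx ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    # redirect any plain-HTTP request to HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"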

How to create TLS certs which I can use in Kubernetes?

I'm facing an issue: I need certs for my Keycloak inside a k8s cluster to use the nginx ingress. What is the easiest way to add them?
I started like this:
kubectl create secret tls tls-keycloak-ingress --cert=localtest.me.crt --key=localtest.me.pem
And then included them via the secret in the chart's yaml:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  path: /auth/?(.*)
  hosts:
    - keycloak.localtest.me
  tls:
    - hosts:
        - keycloak.localtest.me
      secretName: tls-keycloak-ingress
But should I create them on the host machine, or with kubectl somehow?
This is typically solved by adding cert-manager to the cluster. It then tracks the tls sections of all Ingress objects and issues certificates using the provided LE (Let's Encrypt) account:
https://cert-manager.io/docs/tutorials/acme/ingress/
It not only issues and stores the cert in the appropriate secret, but also renews it automatically.
NOTE: if you are using Helm 3, skip the Tiller step.
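For a local hostname like keycloak.localtest.me, which Let's Encrypt cannot validate, cert-manager can issue from a self-signed issuer instead; a minimal sketch, assuming the cert-manager v1 CRDs are installed:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}   # issues self-signed certs, no ACME account needed
With that in place, annotating the Ingress with cert-manager.io/cluster-issuer: selfsigned-issuer would create and renew the tls-keycloak-ingress secret automatically.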
