Airflow prefix resulting in 404 on EKS Fargate

Here is my setup: I have EKS where I have deployed Airflow using the Helm chart. The pods are running on Fargate. I have ALB installed via Helm as well.
When I set up the config using
web:
  path: "/"
  host: "k8s-dev-***************************.us-west-2.elb.amazonaws.com"
The UI shows up when accessing the Ingress URL. The issue comes when I am setting up a prefix:
ingress:
  enabled: true
  apiVersion: networking.k8s.io/v1
  web:
    path: "/airflow-dev"
    host: "k8s-dev-*********************.us-west-2.elb.amazonaws.com"
In the Airflow values, I am also passing the following overrides to the pods:
extraEnv: |
  - name: "AIRFLOW_VAR_FOO"
    value: "Develp_foo"
  - name: "AIRFLOW__WEBSERVER__BASE_URL"
    value: "http://localhost:8080/"
Here is part of the Ingress YAML file:
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /airflow-dev/
        pathType: Prefix
        backend:
          service:
            name: airflow-dev-nodeport
            port:
              number: 8080
If I change the base_url value to localhost:8080/airflow-dev/, the webserver Pod keeps restarting. Any idea why I am getting the 404 page when accessing the Ingress URL?
Ideally I want to access Airflow using ingress_url/airflow-dev/.
Thanks
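For reference, a hedged sketch of how the prefix and the base URL are usually kept in sync when Airflow is served under a path: the chart's ingress path and the externally visible AIRFLOW__WEBSERVER__BASE_URL carry the same /airflow-dev suffix (the host below is a placeholder, not the real ALB hostname).
ingress:
  enabled: true
  web:
    path: "/airflow-dev"
    host: "<alb-dns-name>.us-west-2.elb.amazonaws.com"   # placeholder host
extraEnv: |
  - name: "AIRFLOW__WEBSERVER__BASE_URL"
    value: "http://<alb-dns-name>.us-west-2.elb.amazonaws.com/airflow-dev"   # prefix matches the ingress path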

Related

Nginx Ingress Error 413 Request Entity Too Large

I use ingress-nginx installed with the Helm chart. I used to have the problem that when I uploaded a file (50 MB) I would get the error 413 Request Entity Too Large from nginx.
So I changed the proxy-body-size value in my values.yaml file to 150m, so I should now be able to upload my file.
But now I get the error "413 Request Entity Too Large openresty/1.13.6.2".
I checked the nginx.conf file on the ingress controller and the value for client_max_body_size is correctly set to 150m.
After some research I found out that OpenResty is used by the Lua module in nginx.
Does anybody know how I can set this for OpenResty too, or what parameter I am missing?
My current config is the following:
values.yml:
ingress-nginx:
  defaultBackend:
    nodeSelector:
      beta.kubernetes.io/os: linux
  controller:
    replicaCount: 2
    resources:
      requests:
        cpu: 1
        memory: 4Gi
      limits:
        cpu: 2
        memory: 7Gi
    autoscaling:
      enabled: true
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 90
      targetMemoryUtilizationPercentage: 90
    ingressClassResource:
      name: nginx
      controllerValue: "k8s.io/nginx"
    nodeSelector:
      beta.kubernetes.io/os: linux
    admissionWebhooks:
      enabled: false
      patch:
        nodeSelector:
          beta.kubernetes.io/os: linux
    extraArgs:
      ingress-class: "nginx"
    config:
      proxy-buffer-size: "16k"
      proxy-body-size: "150m"
      client-body-buffer-size: "128k"
      large-client-header-buffers: "4 32k"
      ssl-redirect: "false"
      use-forwarded-headers: "true"
      compute-full-forwarded-for: "true"
      use-proxy-protocol: "false"
ingress.yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: namespacename
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "128k"
    nginx.ingress.kubernetes.io/proxy-body-size: "150m"
spec:
  tls:
  - hosts:
    - hostname
  rules:
  - host: hostname
    http:
      paths:
      - path: /assets/static/
        pathType: ImplementationSpecific
        backend:
          service:
            name: servicename
            port:
              number: 8080
So it turns out the application that had the error had another reverse proxy in front of it (which uses Lua and OpenResty for OAuth registration).
The proxy-body-size attribute needed to be raised there too. After that the file upload worked.
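A hedged illustration of that fix, assuming the front OAuth/OpenResty proxy is itself an ingress-nginx deployment configured through Helm values (names are placeholders; if it is a hand-written nginx/OpenResty config instead, the equivalent directive is client_max_body_size):
# values.yml of the proxy sitting in front of the application (illustrative)
controller:
  config:
    proxy-body-size: "150m"   # must be at least as large as the limit on the backing ingress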

Can't Access Kibana URL: This Kibana installation has strict security requirements enabled that your current browser does not meet

I am not able to access Kibana from the browser. I get the below error when I curl Kibana. Kibana is accessed via the ingress controller.
curl xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.elb.ap-south-1.amazonaws.com/app/kibana
<div class="kibanaWelcomeLogo"></div></div></div><h2 class="kibanaWelcomeTitle">Please upgrade your browser</h2><div class="kibanaWelcomeText">This Kibana installation has strict security requirements enabled that your current browser does not meet.</div></div><script>
// Since this is an unsafe inline script, this code will not run
// in browsers that support content security policy(CSP). This is
// intentional as we check for the existence of __kbnCspNotEnforced__ in
// bootstrap.
window.__kbnCspNotEnforced__ = true;
</script><script src="/bundles/app/kibana/bootstrap.js"></script></body></html>root#10:~/EK/work#
Kibana ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: logging-od
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /app/kibana
        backend:
          serviceName: logging-kibana
          servicePort: 5601
Using kubectl port-forward to the Kibana service works without any issues:
kubectl -n logging port-forward svc/kibana --address 0.0.0.0 8088:5601
I looked at the ingress controller logs, and the request goes through fine:
10.224.91.15 - - [04/Mar/2021:05:12:35 +0000] "GET /app/kibana HTTP/1.1" 200 75425 "-" "curl/7.47.0" 152 0.019 [logging-od-logging-kibana-5601] [] 100.64.131.52:5601 75425 0.016 200 429c46c4006caefa2a160018cca3195d
Any idea?
Go to conf/kibana.yml and try to set csp.strict: false.
But make sure this is not done on a production instance.
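A minimal sketch of that setting, using the file path referenced above (again, not something to leave enabled on a production instance):
# conf/kibana.yml
csp.strict: false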

app on path instead of root not working for Kubernetes Ingress

I have an issue at work with K8s Ingress and I will use fake examples here to illustrate my point.
Assume I have an app called Tweeta and my company is called ABC. My app currently sits on tweeta.abc.com.
But we want to migrate our app to app.abc.com/tweeta.
My current ingress in K8s is as below:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress
spec:
  rules:
  - host: tweeta.abc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: tweeta-backend
          servicePort: 80
For migration, I added a second ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /tweeta/api
        backend:
          serviceName: tweeta-backend
          servicePort: 80
For the sake of continuity, I would like to have both ingresses pointing to my services at the same time. When the new domain is ready and working, I would just tear down the old ingress.
However, I am not having any luck with the new domain using this ingress. Is it because it is hosted on a path and the K8s ingress needs to host on root? Or is it something I would need to configure on the nginx side?
As far as I tried, I couldn't reproduce your problem. So I decided to describe how I tried to reproduce it, so you can follow the same steps and depending on where/if you fail, we can find what is causing the issue.
First of all, make sure you are using an NGINX Ingress controller, as it's more powerful.
I installed my NGINX Ingress using Helm following these steps:
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
$ helm install nginx-ingress stable/nginx-ingress
For the deployment, we are going to use an example from here.
Deploy a hello, world app
Create a Deployment using the following command:
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
Output:
deployment.apps/web created
Expose the Deployment:
kubectl expose deployment web --type=NodePort --port=8080
Output:
service/web exposed
Create Second Deployment
Create a v2 Deployment using the following command:
kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0
Output:
deployment.apps/web2 created
Expose the Deployment:
kubectl expose deployment web2 --port=8080 --type=NodePort
Output:
service/web2 exposed
At this point we have the Deployments and Services running:
$ kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web 1/1 1 1 24m
web2 1/1 1 1 22m
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d5h
nginx-ingress-controller LoadBalancer 10.111.183.151 <pending> 80:31974/TCP,443:32396/TCP 54m
nginx-ingress-default-backend ClusterIP 10.104.30.84 <none> 80/TCP 54m
web NodePort 10.102.38.233 <none> 8080:31887/TCP 24m
web2 NodePort 10.108.203.191 <none> 8080:32405/TCP 23m
For the ingress, we are going to use the one provided in the question but we have to change the backends:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress
spec:
  rules:
  - host: tweeta.abc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 8080
      - path: /api
        backend:
          serviceName: web2
          servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta
        backend:
          serviceName: web
          servicePort: 8080
      - path: /tweeta/api
        backend:
          serviceName: web2
          servicePort: 8080
Now let's test our ingresses:
$ curl tweeta.abc.com
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk
$ curl tweeta.abc.com/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n
$ curl app.abc.com/tweeta
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk
$ curl app.abc.com/tweeta/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n
As you can see, everything is working fine with no modifications to your ingresses.
I assume your frontend Pod expects the path / and your backend Pod expects the path /api.
The first ingress config doesn't transform the request, and it goes to the frontend (Fpod) / backend (Bpod) Pods as is:
http://tweeta.abc.com/ -> ingress -> svc -> Fpod: [ http://tweeta.abc.com/ ]
http://tweeta.abc.com/api -> ingress -> svc -> Bpod: [ http://tweeta.abc.com/api ]
But with the second ingress it doesn't work as expected:
http://app.abc.com/tweeta -> ingress -> svc -> Fpod: [ http://app.abc.com/tweeta ]
http://app.abc.com/tweeta/api -> ingress -> svc -> Bpod: [ http://app.abc.com/tweeta/api ]
The request path seen by the Pod is changed from / to /tweeta and from /api to /tweeta/api, which is probably not the expected behavior. Usually the application in a Pod doesn't care about the Host header, but the path must be correct. If your Pods aren't designed to respond to the additional /tweeta path, they likely respond with 404 (Not Found) when the second ingress is used.
To fix it you have to add a rewrite annotation to remove the tweeta path from the Pods' requests:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta(/|$)(.*)
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      - path: /tweeta(/)(api$|api/.*)
        backend:
          serviceName: tweeta-backend
          servicePort: 80
The result will be as follows, which is exactly how it is supposed to work:
http://app.abc.com/tweeta -> ingress -> svc -> Fpod: [ http://app.abc.com/ ]
http://app.abc.com/tweeta/blabla -> ingress -> svc -> Fpod: [ http://app.abc.com/blabla ]
http://app.abc.com/tweeta/api -> ingress -> svc -> Bpod: [ http://app.abc.com/api ]
http://app.abc.com/tweeta/api/blabla -> ingress -> svc -> Bpod: [ http://app.abc.com/api/blabla ]
To check the ingress controller logs and configuration, use respectively:
$ kubectl logs -n ingress-controller-namespace ingress-controller-pods-name
$ kubectl exec -it -n ingress-controller-namespace ingress-controller-pods-name -- cat /etc/nginx/nginx.conf > local-file-name.txt && less local-file-name.txt
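Since the networking.k8s.io/v1beta1 Ingress API used above has since been removed, here is a hedged sketch of the same rewrite ingress expressed against networking.k8s.io/v1 (service names and ports are taken from the question; the regex paths need pathType: ImplementationSpecific):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tweeta-ingress-v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.abc.com
    http:
      paths:
      - path: /tweeta(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: tweeta-frontend
            port:
              number: 80
      - path: /tweeta(/)(api$|api/.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: tweeta-backend
            port:
              number: 80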

Error when accessing Nextcloud in Kubernetes

My goal is:
create a pod with Nextcloud
create a service to access this pod
from another machine with nginx route a CNAME to the service
I tried to deploy a pod with Nextcloud and a service to access it, but I can't actually access it. I get the error ERR_SSL_PROTOCOL_ERROR.
I just followed a tutorial at the beginning, but I didn't want to use nginx as explained there because I have it on another machine.
When I look at the pods (nextcloud + db) and services they look OK, but I get no response when I try to access Nextcloud.
(nc = nextcloud)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nc
  name: nc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nc
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nc
    spec:
      containers:
      - env:
        - name: DEBUG
          value: "false"
        - name: NEXTCLOUD_URL
          value: http://test.fr
        - name: NEXTCLOUD_ADMIN_USER
          value: admin
        - name: NEXTCLOUD_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: NEXTCLOUD_ADMIN_PASSWORD
        - name: NEXTCLOUD_UPLOAD_MAX_FILESIZE
          value: 4G
        - name: NEXTCLOUD_MAX_FILE_UPLOADS
          value: "20"
        - name: MYSQL_DATABASE
          value: nextcloud
        - name: MYSQL_HOST
          value: mariadb
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_USER
          value: nextcloud
        name: nc
        image: nextcloud
        ports:
        - containerPort: 80
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: vnextcloud
          subPath: html
        - mountPath: /var/www/html/custom_apps
          name: vnextcloud
          subPath: apps
        - mountPath: /var/www/html/config
          name: vnextcloud
          subPath: config
        - mountPath: /var/www/html/data
          name: vimages
          subPath: imgnc
        - mountPath: /var/www/html/themes
          name: vnextcloud
          subPath: themes
      restartPolicy: Always
      volumes:
      - name: vnextcloud
        persistentVolumeClaim:
          claimName: nfs-pvcnextcloud
      - name: vimages
        persistentVolumeClaim:
          claimName: nfs-pvcimages
To create the service I use this command line:
kubectl expose deployment nc --type=NodePort --name=svc-nc --port 80
And to access my Nextcloud I tried the address #IP_MASTER:32500.
My questions are:
How do I check if a pod is working well, to know whether the problem comes from the service or the pod?
What should I do to get access to my Nextcloud? I didn't do the tutorial part "Create self-signed certificates" because I don't know how to manage it. Should it be on my other Linux machine or in my Kubernetes cluster?
1. Please consider using the stable nextcloud Helm chart.
2. This tutorial is a little outdated and can also be found here.
As of the Kubernetes 1.16 release you should change the apiVersion in all your deployments to apiVersion: apps/v1; please take a look at Deprecations and Removals.
In addition you should get an error ValidationError(Deployment.spec): missing required field "selector", so please add a selector in your deployment under Deployment.spec, like:
selector:
  matchLabels:
    app: db
3. Finally, Create self-signed certificates: this repo uses OMGWTFSSL - Self Signed SSL Certificate Generator. Once you provide the necessary information like the server name, the path to your local hostPath and the names for your SSL certificates, they will be created automatically after one pod run under the specified hostPath:
volumes:
- name: certs
  hostPath:
    path: "/home/<someFolderLocation>/certs-pv"
This information should be re-used in the section Nginx reverse Proxy for nginx.conf.
4. In your nc-svc.yaml you can change the service type to type: NodePort (see the Service sketch after this answer).
5. How to verify that your service is working properly:
kubectl get pods,svc,ep -o wide
Pods:
pod/nc-6d8694659d-5przx 1/1 Running 0 15m 10.244.0.6
Svc:
service/svc-nc NodePort 10.102.90.88 <none> 80:32500/TCP
Endpoints:
endpoints/svc-nc 10.244.0.6:80
You can test your service from inside the cluster by running a separate pod (e.g. ubuntu):
curl your_svc_name
You can verify that service discovery is working properly:
cat /etc/resolv.conf
nslookup your_svc_name (your_svc_name.default.svc.cluster.local)
From outside the cluster using NodePort:
curl NODE_IP:NODE_PORT (if this does not work, please verify your firewall rules)
Once you have provided a hostname for your Nextcloud service you should use:
curl -vH 'Host:specified_hostname' http://external_ip/ (using http or https according to your configuration)
In addition you can exec directly into your db pod:
kubectl exec -it db_pod -- /bin/bash and run:
mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
mysqlshow -uroot -p$MYSQL_ROOT_PASSWORD --status nextcloud
6. What should I do to have access to my Nextcloud?
I didn't do the tuto part "Create self-signed certificates" because I don't know how to manage.
7. As described under point 3.
8. This part is not clear to me: from another machine with nginx route a CNAME to the service
Please refer to:
An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead.
Additional resources:
Expose your Kubernetes service from your own custom domains
What’s the difference between a CNAME and a Web Redirect?
Hope this helps.
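Regarding point 4, a minimal sketch of what an explicit NodePort Service for the nc Deployment could look like (the nodePort value mirrors the 32500 seen in the outputs above; treat the whole manifest as illustrative):
apiVersion: v1
kind: Service
metadata:
  name: svc-nc
spec:
  type: NodePort
  selector:
    app: nc
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32500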

Let's Encrypt kubernetes Ingress Controller issuing Fake Certificate

Not sure why I'm getting the fake certificate, even though the certificate is properly issued by Let's Encrypt using cert-manager.
The setup is running on Alibaba Cloud ECS, where one kube-master and one kube-minion form a Kubernetes cluster.
Service Details
root@kube-master:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h20m
my-nginx ClusterIP 10.101.150.247 <none> 80/TCP 77m
Pod Details
root@kube-master:~# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
my-nginx-6cc48cd8db-n6scm 1/1 Running 0 46s app=my-nginx,pod-template-hash=6cc48cd8db
Helm Cert-manager deployed
root@kube-master:~# helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
cert-manager 1 Tue Mar 12 15:29:21 2019 DEPLOYED cert-manager-v0.5.2 v0.5.2 kube-system
kindred-garfish 1 Tue Mar 12 17:03:41 2019 DEPLOYED nginx-ingress-1.3.1 0.22.0 kube-system
Certificate Issued Properly
root@kube-master:~# kubectl describe certs
Name:         tls-prod-cert
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-03-12T10:26:58Z
  Generation:          2
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  nginx-ingress-prod
    UID:                   5ab11929-44b1-11e9-b431-00163e005d19
  Resource Version:  17687
  Self Link:         /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-prod-cert
  UID:               5dad4740-44b1-11e9-b431-00163e005d19
Spec:
  Acme:
    Config:
      Domains:
        zariga.com
      Http 01:
        Ingress:
        Ingress Class:  nginx
  Dns Names:
    zariga.com
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt-prod
  Secret Name:  tls-prod-cert
Status:
  Acme:
    Order:
      URL:  https://acme-v02.api.letsencrypt.org/acme/order/53135536/352104603
  Conditions:
    Last Transition Time:  2019-03-12T10:27:00Z
    Message:               Order validated
    Reason:                OrderValidated
    Status:                False
    Type:                  ValidateFailed
    Last Transition Time:  <nil>
    Message:               Certificate issued successfully
    Reason:                CertIssued
    Status:                True
    Type:                  Ready
Events:
  Type    Reason        Age  From          Message
  ----    ------        ---- ----          -------
  Normal  CreateOrder   27s  cert-manager  Created new ACME order, attempting validation...
  Normal  IssueCert     27s  cert-manager  Issuing certificate...
  Normal  CertObtained  25s  cert-manager  Obtained certificate from ACME server
  Normal  CertIssued    25s  cert-manager  Certificate issued successfully
Ingress Details
root@kube-master:~# kubectl describe ingress
Name:             nginx-ingress-prod
Namespace:        default
Address:
Default backend:  my-nginx:80 (192.168.123.202:80)
TLS:
  tls-prod-cert terminates zariga.com
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     my-nginx:80 (192.168.123.202:80)
Annotations:
  kubernetes.io/ingress.class:        nginx
  kubernetes.io/tls-acme:             true
  certmanager.k8s.io/cluster-issuer:  letsencrypt-prod
Events:
  Type    Reason             Age    From                      Message
  ----    ------             ----   ----                      -------
  Normal  CREATE             7m13s  nginx-ingress-controller  Ingress default/nginx-ingress-prod
  Normal  CreateCertificate  7m8s   cert-manager              Successfully created Certificate "tls-prod-cert"
  Normal  UPDATE             6m57s  nginx-ingress-controller  Ingress default/nginx-ingress-prod
Letsencrypt Nginx Production Definition
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-prod
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: 'true'
  labels:
    app: 'my-nginx'
spec:
  backend:
    serviceName: my-nginx
    servicePort: 80
  tls:
  - secretName: tls-prod-cert
    hosts:
    - zariga.com
Maybe this will be helpful for someone experiencing similar issues. As for me, I forgot to specify the hostname in the Ingress YAML file in both the rules and tls sections.
After adding the hostname to both, it started responding with a proper certificate.
Example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - my.host.com # <----
    secretName: tls-secret
  rules:
  - host: my.host.com # <----
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: my-nginx
          servicePort: 80
Sometimes it may happen if you are using the staging URL in your ClusterIssuer.
Check the Let's Encrypt URL set in your issuer.yaml or clusterissuer.yaml and change it to the production URL: https://acme-v02.api.letsencrypt.org/directory
I faced the same issue once, and changing the URL to the production URL solved it.
Also check that the ingress TLS secret you are using is right.
An actual ClusterIssuer for production should look something like this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: dev-clusterissuer
spec:
  acme:
    email: harsh@example.com
    privateKeySecretRef:
      name: dev-clusterissuer
    server: https://acme-v02.api.letsencrypt.org/directory # <---- check this server URL, it is for Prod; use this only
    solvers:
    - http01:
        ingress:
          class: nginx
If you are using server: https://acme-staging-v02.api.letsencrypt.org/directory you will face this issue; better replace it with server: https://acme-v02.api.letsencrypt.org/directory.
If you're convinced that everything is set up correctly and it still doesn't work, try this.
Edit the deployment of your nginx-controller. Why? Because, if it doesn't find the secret in the namespace it's deployed in, the Nginx controller serves its own certificate (the fake certificate). Not knowing this (I'm new to the game) cost me a few days of my life.
So, either change to the namespace where your Nginx Ingress controller is and get the name of the deployment, then:
kubectl edit deployment nginx-ingress-ingress-nginx-controller -n nginx-ingress
Or if there is only one deployment in that namespace you can just do
kubectl edit deployment
And you should be in edit mode for your nginx controller deployment. Look for the section: spec --> containers: --> args:
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --publish-service=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
    - --election-id=ingress-controller-leader
    - --ingress-class=nginx
    - --configmap=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
    - --validating-webhook=:8443
    - --validating-webhook-certificate=/usr/local/certificates/cert
    - --validating-webhook-key=/usr/local/certificates/key
    - --default-ssl-certificate=app-namespace/letsencrypt-cert-prod
You can add a default certificate to use if your nginx controller doesn't find one (as I have above), so it will search in a namespace for a secret by adding:
--default-ssl-certificate=your-cert-namespace/your-cert-secret
your-cert-namespace: The namespace where your certificate secret is
your-cert-secret: The name of your certificate containing secret
Once you save and close your editor, it should be updated. Then check the logs of your cert manager pod:
kubectl logs cert-manager-xxxpodxx-abcdef -n cert-manager
To make sure that things are working as normal.
You probably won't have this issue if all your resources are deployed in the same namespace.
It is important to note that the ClusterIssuer spec for solvers changed. For people using cert-manager > 0.7.2, this comment saved me so much time: https://github.com/jetstack/cert-manager/issues/1650#issuecomment-518953464, especially on how to configure the ClusterIssuer and Certificate.
In my case, the problem was accessing the domain at the wrong port; my default HTTPS port wasn't 443 but 4443.
For me, the issue was that I forgot to kubectl apply the secret (in my case 'tls-secret.yml'). When deploying K8s manually, such an error is rarely made. However, I'm using GitLab CI/CD to deploy applications, and I forgot to add - kubectl apply -f ./kube/secret to my .gitlab-ci.yml.
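A hedged sketch of the kind of .gitlab-ci.yml step being described (the job name and image are illustrative; only the apply path comes from the answer above):
deploy:
  image: bitnami/kubectl:latest        # illustrative image
  script:
    - kubectl apply -f ./kube/secret   # the step that was missing
    - kubectl apply -f ./kube/         # remaining manifests (illustrative)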
In my case I mistyped the name of my TLS secret inside my ingress rules:
instead of secretName: my-homepage-tls I typed secretName: myy-homepage-tls.
For me, the issue was the ingress class name; since I'm using MicroK8s, the ingress class name is public:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: "your@email.tld"
    privateKeySecretRef:
      name: letsencrypt-prod
    server: "https://acme-v02.api.letsencrypt.org/directory"
    solvers:
    - http01:
        ingress:
          class: public
This happened to me today: I had 2 ingresses in the same namespace and used letsencrypt-prod as the secret name for both. One worked, the other didn't. The secrets are auto-generated and need to have unique names to avoid clashing.
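A hedged illustration of that fix, with hostnames and secret names as placeholders: each Ingress gets its own tls secretName so cert-manager writes each certificate to a distinct Secret.
# tls section of the first Ingress (illustrative)
tls:
- hosts:
  - app-one.example.com
  secretName: app-one-tls
# tls section of the second Ingress (illustrative)
tls:
- hosts:
  - app-two.example.com
  secretName: app-two-tls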
