Nginx-Ingress not picking up certificate from cert-manager - nginx

I'm currently trying to set up an application in K8s behind an nginx-ingress. The certificates should be generated by cert-manager with Let's Encrypt (staging for now).
The application is in the prod namespace, the nginx-ingress-controller in the nginx namespace, and cert-manager lives in the cert-manager namespace.
We set up a ClusterIssuer for Let's Encrypt staging and successfully generated a certificate (we can see it in the Secret and Certificate resources). However, the nginx-ingress-controller still answers with the Kubernetes Ingress Controller Fake Certificate.
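For reference, the relevant part of that setup looks roughly like the following (a minimal sketch reconstructed from the describe output below; all names are taken from there):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: forgerock
  namespace: prod
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-stage
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - ciam.test.fancycorp.com
      secretName: sslcertciam   # cert-manager stores the issued certificate here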
Here are some technical details:
Ingress
❯ kubectl describe ingress/forgerock
Name: forgerock
Labels: <none>
Namespace: prod
Address: someaws-id.elb.eu-central-1.amazonaws.com
Ingress Class: <none>
Default backend: <default>
TLS:
sslcertciam terminates ciam.test.fancycorp.com
Rules:
Host Path Backends
---- ---- --------
ciam.test.fancycorp.com
/am/json/authenticate am:80 (10.0.2.210:8081)
...
/am/extlogin am:80 (10.0.2.210:8081)
Annotations: cert-manager.io/cluster-issuer: letsencrypt-stage
haproxy.router.openshift.io/cookie_name: route
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/body-size: 64m
nginx.ingress.kubernetes.io/enable-cors: false
nginx.ingress.kubernetes.io/proxy-body-size: 64m
nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
nginx.ingress.kubernetes.io/proxy-read-timeout: 600
nginx.ingress.kubernetes.io/proxy-send-timeout: 600
nginx.ingress.kubernetes.io/send-timeout: 600
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: route
nginx.ingress.kubernetes.io/ssl-redirect: true
Events: <none>
Issuer:
❯ kubectl describe clusterissuer/letsencrypt-stage
Name: letsencrypt-stage
Namespace:
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: ClusterIssuer
Metadata:
Creation Timestamp: 2022-09-12T07:26:05Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:acme:
.:
f:email:
f:privateKeySecretRef:
.:
f:name:
f:server:
f:solvers:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-09-12T07:26:05Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:acme:
.:
f:lastRegisteredEmail:
f:uri:
f:conditions:
Manager: controller
Operation: Update
Subresource: status
Time: 2022-09-12T07:26:06Z
Resource Version: 17749318
UID: fcbcbfff-b875-4ac4-805b-65ab0b4e1a93
Spec:
Acme:
Email: admin@fancycorp.com
Preferred Chain:
Private Key Secret Ref:
Name: letsencrypt-stage
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Solvers:
http01:
Ingress:
Class: nginx
Status:
Acme:
Last Registered Email: admin@fancycorp.com
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/68184363
Conditions:
Last Transition Time: 2022-09-12T07:26:06Z
Message: The ACME account was registered with the ACME server
Observed Generation: 1
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
Certificate:
❯ kubectl describe cert/sslcertciam
Name: sslcertciam
Namespace: prod
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: Certificate
Metadata:
Creation Timestamp: 2022-09-12T07:40:04Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
.:
k:{"uid":"2a0af8f2-8166-4a8e-bb50-fd0aa906f844"}:
f:spec:
.:
f:dnsNames:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:secretName:
f:usages:
Manager: controller
Operation: Update
Time: 2022-09-12T07:40:04Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:notAfter:
f:notBefore:
f:renewalTime:
f:revision:
Manager: controller
Operation: Update
Subresource: status
Time: 2022-09-12T07:40:07Z
Owner References:
API Version: networking.k8s.io/v1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: forgerock
UID: 2a0af8f2-8166-4a8e-bb50-fd0aa906f844
Resource Version: 17753197
UID: 2484d1fe-5b80-4cbc-b2f8-7f4276e15a37
Spec:
Dns Names:
ciam.test.fancycorp.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-stage
Secret Name: sslcertciam
Usages:
digital signature
key encipherment
Status:
Conditions:
Last Transition Time: 2022-09-12T07:40:07Z
Message: Certificate is up to date and has not expired
Observed Generation: 1
Reason: Ready
Status: True
Type: Ready
Not After: 2022-12-11T06:40:05Z
Not Before: 2022-09-12T06:40:06Z
Renewal Time: 2022-11-11T06:40:05Z
Revision: 1
Events: <none>
Secret:
❯ kubectl describe secret/sslcertciam
Name: sslcertciam
Namespace: prod
Labels: <none>
Annotations: cert-manager.io/alt-names: ciam.test.fancycorp.com
cert-manager.io/certificate-name: sslcertciam
cert-manager.io/common-name: ciam.test.fancycorp.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group: cert-manager.io
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-stage
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.crt: 5741 bytes
tls.key: 1675 bytes
Certificate Request:
❯ kubectl describe certificaterequests/sslcertciam-p6qpg
Name: sslcertciam-p6qpg
Namespace: prod
Labels: <none>
Annotations: cert-manager.io/certificate-name: sslcertciam
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: sslcertciam-ztc8q
API Version: cert-manager.io/v1
Kind: CertificateRequest
Metadata:
Creation Timestamp: 2022-09-12T07:40:05Z
Generate Name: sslcertciam-
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:cert-manager.io/certificate-name:
f:cert-manager.io/certificate-revision:
f:cert-manager.io/private-key-secret-name:
f:generateName:
f:ownerReferences:
.:
k:{"uid":"2484d1fe-5b80-4cbc-b2f8-7f4276e15a37"}:
f:spec:
.:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:request:
f:usages:
Manager: controller
Operation: Update
Time: 2022-09-12T07:40:05Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:certificate:
f:conditions:
Manager: controller
Operation: Update
Subresource: status
Time: 2022-09-12T07:40:06Z
Owner References:
API Version: cert-manager.io/v1
Block Owner Deletion: true
Controller: true
Kind: Certificate
Name: sslcertciam
UID: 2484d1fe-5b80-4cbc-b2f8-7f4276e15a37
Resource Version: 17753174
UID: 2289de7b-f43f-4859-816b-b4a9794846ec
Spec:
Extra:
authentication.kubernetes.io/pod-name:
cert-manager-75947cd847-7gndz
authentication.kubernetes.io/pod-uid:
91415540-9113-4456-86d2-a0e28478718a
Groups:
system:serviceaccounts
system:serviceaccounts:cert-manager
system:authenticated
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-stage
Request: xxx
UID: 5be755b9-711c-49ac-a962-6b3a3f80d16e
Usages:
digital signature
key encipherment
Username: system:serviceaccount:cert-manager:cert-manager
Status:
Certificate: <base64-encoded-cert>
Conditions:
Last Transition Time: 2022-09-12T07:40:05Z
Message: Certificate request has been approved by cert-manager.io
Reason: cert-manager.io
Status: True
Type: Approved
Last Transition Time: 2022-09-12T07:40:06Z
Message: Certificate fetched from issuer successfully
Reason: Issued
Status: True
Type: Ready
Events: <none>
Curl:
❯ curl -v https://ciam.test.fancycorp.com/am/extlogin/ -k
* Trying xxx.xxx.xxx.xxx:443...
* Connected to ciam.test.fancycorp.com (xxx.xxx.xxx.xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Sep 12 07:43:15 2022 GMT
* expire date: Sep 12 07:43:15 2023 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x126811e00)
> GET /am/extlogin/ HTTP/2
> Host: ciam.test.fancycorp.com
> user-agent: curl/7.79.1
> accept: */*
...
Update 1:
When running kubectl ingress-nginx certs --host ciam.test.fancycorp.com, I am also getting the Fake Certificate returned.
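Independently of the plugin, the certificate the controller actually serves for a host can be checked with openssl's SNI-aware client; a minimal sketch, assuming openssl is installed locally:
# Print subject/issuer/validity of the certificate served for this SNI name.
# In the broken state this shows the "Kubernetes Ingress Controller Fake Certificate" subject.
openssl s_client -connect ciam.test.fancycorp.com:443 \
  -servername ciam.test.fancycorp.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates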

Found the issue and solution:
There was another Ingress defined in a different namespace that declared the same hostname, but it did not reference a valid secret with the TLS certificate. When I deleted that Ingress, it immediately worked.
Lesson learned: be aware of impacts from other namespaces!
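In hindsight, a quick way to spot this kind of conflict is to list Ingresses across all namespaces and look for duplicate hosts; a minimal sketch (assuming cluster-wide read access):
# Every Ingress in the cluster, with the hosts it claims and the TLS secrets it references
kubectl get ingress --all-namespaces \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTS:.spec.rules[*].host,TLS-SECRETS:.spec.tls[*].secretName

# Filter for the specific host
kubectl get ingress --all-namespaces | grep ciam.test.fancycorp.com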

Related

artifactory docker push error unknown: Method Not Allowed

We are using an Artifactory Pro license and installed Artifactory through Helm on Kubernetes.
When we create a Docker local repo (the Repository Path Method) and push a Docker image,
we get a 405 Method Not Allowed error. (docker login / docker pull work normally.)
########## error msg
# docker push art2.bee0dev.lge.com/docker-local/hello-world
e07ee1baac5f: Pushing [==================================================>] 14.85kB
unknown: Method Not Allowed
##########
We are using an HAProxy load balancer for TLS in front of the NGINX Ingress Controller
(the NGINX Ingress Controller's HTTP NodePort is 31071).
Please help us figure out how we can solve this problem.
The Artifactory and HAProxy settings are as follows.
########## value.yaml
global:
joinKeySecretName: "artbee-stg-joinkey-secret"
masterKeySecretName: "artbee-stg-masterkey-secret"
storageClass: "sa-stg-netapp8300-bee-blk-nonretain"
ingress:
enabled: true
defaultBackend:
enabled: false
hosts: ["art2.bee0dev.lge.com"]
routerPath: /
artifactoryPath: /artifactory/
className: ""
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_pass_header Server;
proxy_set_header X-JFrog-Override-Base-Url https://art2.bee0dev.lge.com;
labels: {}
tls: []
additionalRules: []
## Artifactory license.
artifactory:
name: artifactory
replicaCount: 1
image:
registry: releases-docker.jfrog.io
repository: jfrog/artifactory-pro
# tag:
pullPolicy: IfNotPresent
labels: {}
updateStrategy:
type: RollingUpdate
migration:
enabled: false
customInitContainersBegin: |
- name: "init-mount-permission-setup"
image: "{{ .Values.initContainerImage }}"
imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
securityContext:
runAsUser: 0
runAsGroup: 0
allowPrivilegeEscalation: false
capabilities:
drop:
- NET_RAW
command:
- 'bash'
- '-c'
- if [ $(ls -la /var/opt/jfrog | grep artifactory | awk -F' ' '{print $3$4}') == 'rootroot' ]; then
echo "mount permission=> root:root";
echo "change mount permission to 1030:1030 " {{ .Values.artifactory.persistence.mountPath }};
chown -R 1030:1030 {{ .Values.artifactory.persistence.mountPath }};
else
echo "already set. No change required.";
ls -la {{ .Values.artifactory.persistence.mountPath }};
fi
volumeMounts:
- mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
name: artifactory-volume
database:
maxOpenConnections: 80
tomcat:
maintenanceConnector:
port: 8091
connector:
maxThreads: 200
sendReasonPhrase: false
extraConfig: 'acceptCount="100"'
customPersistentVolumeClaim: {}
license:
## licenseKey is the license key in plain text. Use either this or the license.secret setting
licenseKey: "???"
secret:
dataKey:
resources:
requests:
memory: "2Gi"
cpu: "1"
limits:
memory: "20Gi"
cpu: "8"
javaOpts:
xms: "1g"
xmx: "12g"
admin:
ip: "127.0.0.1"
username: "admin"
password: "!swiit123"
secret:
dataKey:
service:
name: artifactory
type: ClusterIP
loadBalancerSourceRanges: []
annotations: {}
persistence:
mountPath: "/var/opt/jfrog/artifactory"
enabled: true
accessMode: ReadWriteOnce
size: 100Gi
type: file-system
storageClassName: "sa-stg-netapp8300-bee-blk-nonretain"
nginx:
enabled: false
##########
########## haproxy config
frontend cto-stage-http-frontend
bind 10.185.60.75:80
bind 10.185.60.76:80
bind 10.185.60.201:80
bind 10.185.60.75:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
bind 10.185.60.76:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
bind 10.185.60.201:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
mode http
option forwardfor
option accept-invalid-http-request
acl k8s-cto-stage hdr_end(host) -i -f /etc/haproxy/web-ide/cto-stage
use_backend k8s-cto-stage-http if k8s-cto-stage
backend k8s-cto-stage-http
mode http
redirect scheme https if !{ ssl_fc }
option tcp-check
balance roundrobin
server lgestgbee04v 10.185.60.78:31071 check fall 3 rise 2
##########
The request doesn't seem to be landing at the correct endpoint. Please remove the semicolon from the docker command and retry. That is, instead of
docker push art2.bee0dev.lge.com;/docker-local/hello-world
try executing it like below:
docker push art2.bee0dev.lge.com/docker-local/hello-world

SSL/TLS passthrough NGINX-Ingress-Controller on Openshift Not Working

I have deployed the NGINX-Operator and NGINX-Ingress-Controller per the following GitHub repo and the secrets from devopscube.
The current setup is:
AWS Classic LB -> ROSA Cluster [Helm NGINX-Ingress-Controller -> NGINX-Ingress -> Service -> Pod]
Here is the YAML file I used to create the NGINX-Ingress-Controller resource. You will see that enableTLSPassthrough is set to true; however, I am unsure it is taking effect. My goal here is end-to-end TLS encryption from the client to the NGINX service/pod. Right now I am met with error code 400 when accessing it in the browser through http (http works perfectly fine in the hello-world setup).
"400 Bad Request The plain HTTP request was sent to HTTPS port"
kind: NginxIngress
apiVersion: charts.nginx.org/v1alpha1
metadata:
name: nginxingress
namespace: nginx-ingress
spec:
controller:
affinity: {}
appprotect:
enable: false
appprotectdos:
debug: false
enable: false
maxDaemons: 0
maxWorkers: 0
memory: 0
config:
annotations: {}
entries: {}
customPorts: []
defaultTLS:
secret: nginx-ingress/default-server-secret
enableCertManager: false
enableCustomResources: true
enableExternalDNS: false
enableLatencyMetrics: false
enableOIDC: false
enablePreviewPolicies: false
enableSnippets: false
enableTLSPassthrough: true
extraContainers: []
globalConfiguration:
create: false
spec: {}
healthStatus: false
healthStatusURI: /nginx-health
hostNetwork: false
image:
pullPolicy: IfNotPresent
repository: nginx/nginx-ingress
tag: 2.3.0-ubi
ingressClass: nginx
initContainers: []
kind: deployment
logLevel: 1
nginxDebug: false
nginxReloadTimeout: 60000
nginxStatus:
allowCidrs: 127.0.0.1
enable: true
port: 8080
nginxplus: false
nodeSelector: {}
pod:
annotations: {}
extraLabels: {}
priorityClassName: null
readyStatus:
enable: true
port: 8081
replicaCount: 1
reportIngressStatus:
annotations: {}
enable: true
enableLeaderElection: true
ingressLink: ''
resources:
requests:
cpu: 100m
memory: 128Mi
service:
annotations: {}
create: true
customPorts: []
externalIPs: []
externalTrafficPolicy: Local
extraLabels: {}
httpPort:
enable: true
nodePort: ''
port: 80
targetPort: 80
httpsPort:
enable: true
nodePort: ''
port: 443
targetPort: 443
loadBalancerIP: ''
loadBalancerSourceRanges: []
type: LoadBalancer
serviceAccount:
imagePullSecretName: ''
setAsDefaultIngress: true
terminationGracePeriodSeconds: 30
tolerations: []
volumeMounts: []
volumes: []
watchNamespace: ''
wildcardTLS:
secret: null
nginxServiceMesh:
enable: false
enableEgress: false
prometheus:
create: true
port: 9113
scheme: http
secret: ''
rbac:
create: true
Taking a look at the NGINX-Ingress-Controller pod logs on creation, I can see nothing about TLS being enabled. A flag does get set in the args section once the pod deploys, but I am still unsure this is working.
W0802 20:33:26.594545 1 flags.go:273] Ignoring unhandled arguments: []
I0802 20:33:26.594683 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.0 PlusFlag=false
I0802 20:33:26.594689 1 flags.go:191] Commit=979db22d8065b22fedb410c9b9c5875cf0a6dc66 Date=2022-07-12T08:51:24Z DirtyState=false Arch=linux/amd64 Go=go1.18.3
I0802 20:33:26.601340 1 main.go:210] Kubernetes version: 1.22.0
I0802 20:33:26.606551 1 main.go:326] Using nginx version: nginx/1.23.0
2022/08/02 20:33:26 [notice] 13#13: using the "epoll" event method
2022/08/02 20:33:26 [notice] 13#13: nginx/1.23.0
2022/08/02 20:33:26 [notice] 13#13: built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
2022/08/02 20:33:26 [notice] 13#13: OS: Linux 4.18.0-305.19.1.el8_4.x86_64
2022/08/02 20:33:26 [notice] 13#13: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/08/02 20:33:26 [notice] 13#13: start worker processes
2022/08/02 20:33:26 [notice] 13#13: start worker process 15
2022/08/02 20:33:26 [notice] 13#13: start worker process 16
2022/08/02 20:33:26 [notice] 13#13: start worker process 17
2022/08/02 20:33:26 [notice] 13#13: start worker process 18
I0802 20:33:26.630298 1 listener.go:54] Starting Prometheus listener on: :9113/metrics
I0802 20:33:26.630860 1 leaderelection.go:248] attempting to acquire leader lease nginx-ingress/nginxingress-nginx-ingress-leader-election...
I0802 20:33:26.639466 1 leaderelection.go:258] successfully acquired lease nginx-ingress/nginxingress-nginx-ingress-leader-election
Here is the Ingress Resource YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
# kubernetes.io/ingress.class: addon-http-application-routing
# nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
# nginx.ingress.kubernetes.io/ssl-redirect: "true"
# nginx.ingress.kubernetes.io/proxy-redirect-from: https
# nginx.ingress.kubernetes.io/proxy-redirect-to: https
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# nginx.ingress.kubernetes.io/proxy-ssl-protocols: "HTTPS"
# nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
defaultBackend:
service:
name: nginx
port:
number: 443
ingressClassName: nginx
tls:
- hosts:
- nginx-tlssni.apps.clustername.openshiftapps.com
secretName: nginx-tls
rules:
- host: "nginx-tlssni.apps.clustername.openshiftapps.com"
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: nginx
port:
number: 443
Thank you for your insight :)
There are many kinds of NGINX-based ingress controllers. The two that are most easily confused are the NGINX Inc. ingress controller and the community Kubernetes ingress-nginx controller.
My understanding is that nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" is for the community Kubernetes ingress-nginx controller.
Now to your question: based on this example, try changing your annotations to the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.org/ssl-services: "nginx" # Name of your k8s service with TLS
...
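As I understand the NGINX Inc. documentation, nginx.org/ssl-services makes the controller use TLS for the upstream connection to the listed backend services (re-encryption rather than raw passthrough). A hedged sketch of how that annotation might sit in the full Ingress from the question (host, secret, and service names are taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # NGINX Inc. controller: use TLS when proxying to the "nginx" service
    nginx.org/ssl-services: "nginx"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - nginx-tlssni.apps.clustername.openshiftapps.com
      secretName: nginx-tls
  rules:
    - host: "nginx-tlssni.apps.clustername.openshiftapps.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nginx
                port:
                  number: 443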

kubernetes - nginx ingress controller - upstream timed out (110: Operation timed out) while connecting to upstream from another namespace

I am trying to fix this issue:
2021/11/24 14:28:46 [error] 610#610: *64890 upstream timed out (110:Operation timed out) while connecting to upstream, client: 172.31.30.204, server: _, request: "GET /user/list HTTP/1.1", upstream: "http://10.111.78.149:8080/list", host: "3.142.236.87:30080"
I tried to get inside the nginx-ingress-controller pod and execute this command:
curl 10.111.78.149:8080/list
(10.111.78.149 is the CLUSTER-IP of the usertest service.)
Received this error:
curl: (28) Failed to connect to 10.111.78.149 port 8080 after 131181 ms: Operation timed out
In the other namespace (demo), I also tried the same command from inside a pod:
curl 10.111.78.149:8080/list
I got the expected response:
{"message":"User list","users":[{"name":"Jon Snow"},{"name":"Frodo Baggins"},{"name":"Bilbo Baggins"}]}
The usertest service belongs to the demo namespace.
Am I missing something here?
Edit: here is the service.yaml:
apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: user-app-nginx-ingress
meta.helm.sh/release-namespace: default
creationTimestamp: "2021-11-24T14:27:39Z"
labels:
app.kubernetes.io/instance: user-app-nginx-ingress
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: nginx-ingress
app.kubernetes.io/version: 1.16.0
helm.sh/chart: nginx-ingress-0.1.0
name: usertest
namespace: demo
resourceVersion: "1553814"
uid: 266b7fc7-6fdb-455b-acfd-e4ba110460ec
spec:
clusterIP: 10.111.78.149
clusterIPs:
- 10.111.78.149
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/instance: user-app-nginx-ingress
app.kubernetes.io/name: nginx-ingress
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
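A few hedged checks that may help narrow down where the connection breaks (the FQDN below assumes the default cluster.local DNS suffix; none of this is a confirmed fix):
# From inside the ingress-controller pod: try the service DNS name instead of the raw ClusterIP
curl -v http://usertest.demo.svc.cluster.local:8080/list

# Confirm the Service actually has endpoints behind it
kubectl get endpoints usertest -n demo

# Check whether a NetworkPolicy in the demo namespace restricts traffic from other namespaces
kubectl get networkpolicy -n demo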

Setting up Ingress (Kubernetes)

I want to set up an Ingress, which routes traffic to my underlying Services. Unfortunately, I get an error when I deploy my ingress-controller-deployment.yaml and I don't know why... The Pod with the ingress controller crashes immediately with the status "CrashLoopBackOff".
My understanding is that the ingress controller has to be deployed in a Pod, and this Pod can be accessed through the ingress-svc Service. The ingress-svc seems to work, but the Pod crashes. Once the ingress controller works, I will need an additional file that defines the routes and everything, but I don't see the point of continuing without a working, deployable ingress controller.
Pod description:
Name: ingress-controller-7749c785f-x94ll
Namespace: ingress
Node: gke-cluster-1-default-pool-8484e77d-r4wp/10.128.0.2
Start Time: Thu, 26 Apr 2018 14:25:04 +0200
Labels: k8s-app=nginx-ingress-lb
pod-template-hash=330573419
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"ingress","name":"ingress-controller-7749c785f","uid":"d8ff0a6d-494c-11e8-a840
-420...
Status: Running
IP: 10.8.0.14
Created By: ReplicaSet/ingress-controller-7749c785f
Controlled By: ReplicaSet/ingress-controller-7749c785f
Containers:
nginx-ingress-controller:
Container ID: docker://5654c7dffc44510132cba303d66ee570280f2cec235e4d4fa6ef8ad543e0c91d
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:39cc6ce23e5bcdf8aa78bc28bbcfe0999e449bf99fe2e8d60984b417facc5cd4
Ports: 80/TCP, 443/TCP
Args:
/nginx-ingress-controller
--admin-backend-svc=$(POD_NAMESPACE)/admin-backend
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 26 Apr 2018 14:26:57 +0200
Finished: Thu, 26 Apr 2018 14:26:57 +0200
Ready: False
Restart Count: 4
Liveness: http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-controller-7749c785f-x94ll (v1:metadata.name)
POD_NAMESPACE: ingress (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plbss (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-plbss:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plbss
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Ingress-controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ingress-controller
spec:
replicas: 1
revisionHistoryLimit: 3
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
spec:
containers:
- args:
- /nginx-ingress-controller
- "--admin-backend-svc=$(POD_NAMESPACE)/admin-backend"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0"
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: ingress-svc
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
selector:
k8s-app: nginx-ingress-lb
The issue is the args. The args on one of my working controllers are:
args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx
  - --annotations-prefix=nginx.ingress.kubernetes.io
I had also created the ConfigMaps for configuration, tcp-services, and udp-services.
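A minimal sketch of creating those ConfigMaps, assuming the controller runs in the ingress namespace from the question and the names match the flag values above; empty ConfigMaps are typically enough for the controller to start:
kubectl create configmap nginx-configuration -n ingress
kubectl create configmap tcp-services -n ingress
kubectl create configmap udp-services -n ingress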

kibana server.basePath results in 404

I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect the URL symmetrically, meaning that request URLs would live under the path prefix /kibana4 and response URLs would also be under /kibana4.
This is not the case. server.basePath affects the URL asymmetrically: all request URLs remain the same, but response URLs include the prefix. For example, the Kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, but all href URLs include the prefix /kibana4.
server.basePath only works if you use a proxy that removes the prefix before forwarding requests to Kibana.
Below is the HAProxy configuration that I used
frontend main *:80
    acl url_kibana path_beg -i /kibana4
    use_backend kibana if url_kibana

backend kibana
    mode http
    reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2\
    server x.x.x.x:5601
The important bit is the reqrep expression that removes the path prefix /kibana4 from the URL before forwarding the request to Kibana.
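As a side note, newer Kibana releases (roughly 6.3 and later, so not the 4.4.1 in the question) can strip the prefix themselves via server.rewriteBasePath, which removes the need for the proxy rewrite; a minimal kibana.yml sketch, assuming such a version:
server.basePath: "/kibana4"
server.rewriteBasePath: true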
Also, after changing server.basePath, you may need to modify the nginx conf to rewrite the request, otherwise it won't work. Below is the one that works for me:
location /kibana/ {
    proxy_pass http://<kibana IP>:5601/; # Ensure the trailing slash is in place!
    proxy_buffering off;
    #proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    #access_log off;
}
The config files below worked for me for an EFK setup in a k8s cluster.
Elastisearch Statefulset: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
volumeMounts:
- mountPath: "/usr/share/kibana/config/kibana.yml"
subPath: "kibana.yml"
name: kibana-config
volumes:
- name: kibana-config
configMap:
name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
nginx.ingress.kubernetes.io/proxy-body-size: 100m
# nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- example.com
# # This assumes tls-secret exists and the SSL
# # certificate contains a CN for example.com
secretName: tls-secret
rules:
- host: example.com
http:
paths:
- backend:
service:
name: logs-kibana
port:
number: 5601
path: /kibana
pathType: Prefix
auth file
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
fluentd-service-account: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods
Deployment Steps.
apt install apache2-utils -y
# It will prompt for a password, pass a password.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml
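A hedged way to sanity-check the rollout once the steps above have been applied (namespaces and names are taken from the manifests above):
kubectl get pods -n logging                  # es-cluster-0..2 should become Ready
kubectl get pods -l app=kibana               # Kibana Deployment (default namespace here)
kubectl get ingress kibana-ingress           # host/address for the Kibana Ingress
kubectl logs daemonset/fluentd | head        # fluentd should be connecting to logs-elasticsearch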
