Since Kubernetes v1.6, the RBAC authorization feature is enabled by default. This means the deployments/configurations I had for v1.5 no longer work.
One of the key components that needs to be granted access is nginx; otherwise, a message like the following shows up in the logs:
F0425 15:08:07.246596 1 main.go:116] no service with name kube-system/default-http-backend found: the server does not allow access to the requested resource (get services default-http-backend)
UPDATED: kubernetes/nginx has updated its documentation here, and the RBAC details are here.
OLD:
In order to support RBAC, we need two things:
define the ServiceAccount/ClusterRole/ClusterRoleBinding
set a serviceAccount for the nginx deployment
Here are the files I use to set it up:
nginx-roles.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: nginx-role
rules:
- apiGroups: [""]
resources: ["secrets", "configmaps", "services", "endpoints"]
verbs:
- get
- watch
- list
- proxy
- use
- redirect
- apiGroups: [""]
resources: ["events"]
verbs:
- redirect
- patch
- post
- apiGroups:
- "extensions"
resources:
- "ingresses"
verbs:
- get
- watch
- list
- proxy
- use
- redirect
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: nginx-role
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-role
subjects:
- kind: ServiceAccount
name: nginx
namespace: kube-system
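Once the roles are applied, the permissions can be sanity-checked by impersonating the ServiceAccount with kubectl auth can-i (a quick check, not part of the original files):
kubectl auth can-i get services --as=system:serviceaccount:kube-system:nginx
kubectl auth can-i list ingresses.extensions --as=system:serviceaccount:kube-system:nginx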
nginx-ingress-controller.yml
(with a nodeSelector pinning it to the kubecluster-amd-1 node and default-http-backend as the default backend)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-controller
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: nginx-ingress-controller
spec:
serviceAccount: nginx
hostNetwork: true
nodeSelector:
kubernetes.io/hostname: kubecluster-amd-1
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.4
name: nginx-ingress-controller
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 20
timeoutSeconds: 1
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
- containerPort: 5683
hostPort: 5683
protocol: UDP
- containerPort: 5684
hostPort: 5684
protocol: UDP
- containerPort: 53
hostPort: 53
protocol: UDP
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
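To roll this out, apply both files and check the controller logs; the permission error shown above should be gone (a sketch, assuming the file names used here):
kubectl apply -f nginx-roles.yml
kubectl apply -f nginx-ingress-controller.yml
kubectl -n kube-system logs deployment/nginx-ingress-controller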
Related
I have my A record on Netlify mapped to my Load Balancer's IP address on DigitalOcean, and it's able to hit the nginx server, but I'm getting a 404 when trying to access any of the app's APIs. I noticed that the status of my Ingress doesn't show that it is bound to the Load Balancer.
Does anybody know what I am missing to get this setup?
Application Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: d2d-server
spec:
rules:
- host: api.cloud.myhostname.com
http:
paths:
- backend:
service:
name: d2d-server
port:
number: 443
path: /
pathType: ImplementationSpecific
Application Service:
apiVersion: v1
kind: Service
metadata:
name: d2d-server
spec:
selector:
app: d2d-server
ports:
- name: http-api
protocol: TCP
port: 443
targetPort: 8080
type: ClusterIP
Ingress Controller:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
uid: fc64d9f6-a935-49b2-9d7a-b862f660a4ea
resourceVersion: '257931'
generation: 1
creationTimestamp: '2021-10-22T05:31:26Z'
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
deployment.kubernetes.io/revision: '1'
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
spec:
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
defaultMode: 420
containers:
- name: controller
image: >-
k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
args:
- /nginx-ingress-controller
- '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--election-id=ingress-controller-leader'
- '--controller-class=k8s.io/ingress-nginx'
- '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--validating-webhook=:8443'
- '--validating-webhook-certificate=/usr/local/certificates/cert'
- '--validating-webhook-key=/usr/local/certificates/key'
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
resources:
requests:
cpu: 100m
memory: 90Mi
volumeMounts:
- name: webhook-cert
readOnly: true
mountPath: /usr/local/certificates/
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
allowPrivilegeEscalation: true
restartPolicy: Always
terminationGracePeriodSeconds: 300
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
serviceAccount: ingress-nginx
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
Load Balancer:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
kubernetes.digitalocean.com/load-balancer-id: <LB_ID>
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
service.beta.kubernetes.io/do-loadbalancer-name: ingress-nginx
service.beta.kubernetes.io/do-loadbalancer-protocol: https
status:
loadBalancer:
ingress:
- ip: <IP_HIDDEN>
spec:
ports:
- name: http
protocol: TCP
appProtocol: http
port: 80
targetPort: http
nodePort: 31661
- name: https
protocol: TCP
appProtocol: https
port: 443
targetPort: https
nodePort: 32761
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
clusterIP: <IP_HIDDEN>
clusterIPs:
- <IP_HIDDEN>
type: LoadBalancer
sessionAffinity: None
externalTrafficPolicy: Local
healthCheckNodePort: 30477
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
I just needed to add the field ingressClassName: nginx to the Ingress spec.
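For reference, the change to the Ingress above looks like this (the class name nginx is assumed to match the IngressClass created by the ingress-nginx Helm chart):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: d2d-server
spec:
  ingressClassName: nginx
  rules:
  - host: api.cloud.myhostname.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: d2d-server
            port:
              number: 443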
The kind: Ingress resources are proxy rules that manage traffic from the Ingress Controller to the in-cluster services. But to achieve this, outside traffic needs to reach the Ingress Controller first.
https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
Let's assume that "client" is our LoadBalancer.
So what I assume you want to do is point your LoadBalancer at the Ingress Controller; then, based on your Ingress rules, it will route traffic to your (in this case) d2d service.
To point a LB at a pod, you need to create a Service resource with the spec.type: LoadBalancer field. I modified an example from DigitalOcean that should match your needs. Notice the annotations of the Service, which can modify the LoadBalancer parameters; more on this can be found here: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
annotations:
# Edit: set it to http since no certificate is provided
service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
service.beta.kubernetes.io/do-loadbalancer-name: "<YOUR_LB_NAME>"
spec:
type: LoadBalancer
selector:
# only the labels present on the controller Deployment's pod template
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
- name: https
protocol: TCP
port: 443
targetPort: 443
- name: webhook
protocol: TCP
port: 8443
targetPort: 8443
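Once applied, the DigitalOcean controller should provision the load balancer and the Service should show an external IP (a quick check):
kubectl -n ingress-nginx get svc ingress-nginx-controller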
I'm trying to run a minimalistic sample of oauth2-proxy with Keycloak. I used oauth2-proxy's k8s example, which uses dex, to build up my keycloak example.
The problem is that I don't seem to get the proxy to work:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-774999875d-zbczh 1/1 Running 0 2m49s
keycloak-758d7c758-27pgh 1/1 Running 0 2m49s
oauth2-proxy-5875dd67db-8qwqn 0/1 CrashLoopBackOff 2 2m49s
Logs indicate a network error:
# kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
I believe I have set up the ingress correctly, though.
Steps to reproduce
Set up the cluster:
#Create kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
Deploy the test application. My deployment.yaml file:
###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
name: oauth2-proxy
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
name: oauth2-proxy
template:
metadata:
labels:
name: oauth2-proxy
spec:
containers:
- args:
- --provider=oidc
- --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
- --upstream="file://dev/null"
- --client-id=oauth2-proxy
- --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
- --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
- --email-domain=*
- --scope=openid profile email users
- --cookie-domain=.localtest.me
- --whitelist-domain=.localtest.me
- --pass-authorization-header=true
- --pass-access-token=true
- --pass-user-headers=true
- --set-authorization-header=true
- --set-xauthrequest=true
- --cookie-refresh=1m
- --cookie-expire=30m
- --http-address=0.0.0.0:4180
image: quay.io/oauth2-proxy/oauth2-proxy:latest
# image: "quay.io/pusher/oauth2_proxy:v5.1.0"
name: oauth2-proxy
ports:
- containerPort: 4180
name: http
protocol: TCP
livenessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
successThreshold: 1
periodSeconds: 10
resources:
{}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
type: ClusterIP
ports:
- port: 4180
targetPort: 4180
name: http
selector:
name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
large_client_header_buffers 4 32k;
spec:
rules:
- host: oauth2-proxy.localtest.me
http:
paths:
- path: /
backend:
serviceName: oauth2-proxy
servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
name: httpbin
spec:
replicas: 1
selector:
matchLabels:
name: httpbin
template:
metadata:
labels:
name: httpbin
spec:
containers:
- image: kennethreitz/httpbin:latest
name: httpbin
resources: {}
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
hostname: httpbin
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: httpbin-svc
labels:
app: httpbin
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: httpbin
labels:
name: httpbin
annotations:
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
rules:
- host: httpbin.localtest.me
http:
paths:
- path: /
backend:
serviceName: httpbin-svc
servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: keycloak
name: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- args:
- -Dkeycloak.migration.action=import
- -Dkeycloak.migration.provider=singleFile
- -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
- -Dkeycloak.migration.strategy=IGNORE_EXISTING
env:
- name: KEYCLOAK_PASSWORD
value: password
- name: KEYCLOAK_USER
value: admin@example.com
- name: KEYCLOAK_HOSTNAME
value: keycloak.localtest.me
- name: PROXY_ADDRESS_FORWARDING
value: "true"
image: quay.io/keycloak/keycloak:15.0.2
# image: jboss/keycloak:10.0.0
name: keycloak
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
volumeMounts:
- mountPath: /etc/keycloak_import
name: keycloak-config
hostname: keycloak
volumes:
- configMap:
defaultMode: 420
name: keycloak-import-config
name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
name: keycloak-svc
labels:
app: keycloak
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: http
targetPort: http
port: 8080
selector:
app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: keycloak
spec:
tls:
- hosts:
- "keycloak.localtest.me"
rules:
- host: "keycloak.localtest.me"
http:
paths:
- path: /
backend:
serviceName: keycloak-svc
servicePort: 8080
---
# kubectl apply -f deployment.yaml
Configure the /etc/hosts file on the development machine to include the localtest.me domains:
127.0.0.1 oauth2-proxy.localtest.me
127.0.0.1 keycloak.localtest.me
127.0.0.1 httpbin.localtest.me
127.0.0.1 localhost
Note that I can reach http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration with no problem from my host browser. It appears that the oauth2-proxy pod cannot reach the service via the ingress. I would really appreciate any sort of help here.
It turned out that I needed to add keycloak to custom-dns.yaml. The original file (configured for dex) looked like this:
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 dex.localtest.me. # <----Configured for dex
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
With keycloak added, it looks like this:
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 keycloak.localtest.me
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
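After updating the ConfigMap, CoreDNS has to reload it; the same rollout commands used during setup apply (assuming the edited file is still custom-dns.yaml):
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns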
Machine: Ubuntu 18.06 running on a VPS (technically a server). The cluster is set up with kubeadm.
Problem: I am not able to hit the controller via domain.com/
So, basically, I simply applied these two YAML files:
kubectl apply -f
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
kubectl apply -f
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
The nginx controller is up and running, and it picks up the other manifests.
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-766c77b7d4-8sbrh 1/1 Running 0 46m
The logs of the ingress controller loading ingresses:
I1225 11:39:43.663283 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"articleservice-ingress", UID:"c5d24d09-0839-11e9-a12a-0050563e015b", APIVersion:"extensions/v1beta1", ResourceVersion:"117205", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/articleservice-ingress
I1225 11:39:43.663499 9 controller.go:172] Configuration changes detected, backend reload required.
I1225 11:39:43.893031 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"cartservice-ingress", UID:"c5f6051e-0839-11e9-a12a-0050563e015b", APIVersion:"extensions/v1beta1", ResourceVersion:"117208", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/cartservice-ingress
I1225 11:39:43.902002 9 controller.go:190] Backend successfully reloaded.
[25/Dec/2018:11:39:43 +0000] TCP 200 0 0 0.000
I1225 11:39:44.169490 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"catalogservice-ingress", UID:"c62008a1-0839-11e9-a12a-0050563e015b", APIVersion:"extensions/v1beta1", ResourceVersion:"117211", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/catalogservice-ingress
I1225 11:39:46.634113 9 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"customerservice-ingress", UID:"c7984c98-0839-11e9-a12a-0050563e015b", APIVersion:"extensions/v1beta1", ResourceVersion:"117215", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/customerservice-ingress
I1225 11:39:46.997363 9 controller.go:172] Configuration changes detected, backend reload required.
[25/Dec/2018:11:39:47 +0000] TCP 200 0 0 0.000
I1225 11:39:47.242642 9 controller.go:190] Backend successfully reloaded.
Now I expect to be able to access the controller via domain.com/ (which should return a 404) and the other registered ingresses via domain.com/ingress.
I guess I missed something very basic. If you need any further information just let me know.
Output of kubectl -n ingress-nginx describe service/ingress-nginx
Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: NodePort
IP: 10.100.48.223
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30734/TCP
Endpoints: 192.168.0.8:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32609/TCP
Endpoints: 192.168.0.8:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
All credits to @Konstantin Vustin and @rom. I noticed I can access the ingress controller via the NodePort 30734.
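For example (assuming domain.com resolves to the node's address), the controller's default 404 response is reachable directly on that NodePort:
curl -I http://domain.com:30734/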
Consider using MetalLB.
This way you can use standard ports and a predefined set of IP addresses to access your resources. It also allows you to create Services of type LoadBalancer with an issued ExternalIP on a bare-metal Kubernetes cluster installation.
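As a rough sketch, the older ConfigMap-based MetalLB configuration looks like the following; the address range is a placeholder for IPs routed to your node, and recent MetalLB releases use IPAddressPool/L2Advertisement CRDs instead:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.240-203.0.113.250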
I want to set up an Ingress, which routes traffic to my underlying Services. Unfortunately, I get an error when I deploy my ingress-controller-deployment.yaml and I don't know why... The pod with the ingress controller crashes immediately with the status "CrashLoopBackOff".
As I understand it, the ingress controller has to be deployed in a Pod, and this pod can be accessed through the ingress-svc. The ingress-svc seems to work, but the Pod crashes. Once the ingress controller works, I need an additional file that defines the routes and so on. But I don't see the point of continuing without a working and deployable ingress controller.
Pod description:
Name: ingress-controller-7749c785f-x94ll
Namespace: ingress
Node: gke-cluster-1-default-pool-8484e77d-r4wp/10.128.0.2
Start Time: Thu, 26 Apr 2018 14:25:04 +0200
Labels: k8s-app=nginx-ingress-lb
pod-template-hash=330573419
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"ingress","name":"ingress-controller-7749c785f","uid":"d8ff0a6d-494c-11e8-a840
-420...
Status: Running
IP: 10.8.0.14
Created By: ReplicaSet/ingress-controller-7749c785f
Controlled By: ReplicaSet/ingress-controller-7749c785f
Containers:
nginx-ingress-controller:
Container ID: docker://5654c7dffc44510132cba303d66ee570280f2cec235e4d4fa6ef8ad543e0c91d
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:39cc6ce23e5bcdf8aa78bc28bbcfe0999e449bf99fe2e8d60984b417facc5cd4
Ports: 80/TCP, 443/TCP
Args:
/nginx-ingress-controller
--admin-backend-svc=$(POD_NAMESPACE)/admin-backend
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 26 Apr 2018 14:26:57 +0200
Finished: Thu, 26 Apr 2018 14:26:57 +0200
Ready: False
Restart Count: 4
Liveness: http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-controller-7749c785f-x94ll (v1:metadata.name)
POD_NAMESPACE: ingress (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plbss (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-plbss:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plbss
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
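The concrete reason for the crash is usually visible in the logs of the previously terminated container (pod name and namespace taken from the description above):
kubectl -n ingress logs ingress-controller-7749c785f-x94ll --previous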
Ingress-controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ingress-controller
spec:
replicas: 1
revisionHistoryLimit: 3
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
spec:
containers:
- args:
- /nginx-ingress-controller
- "--admin-backend-svc=$(POD_NAMESPACE)/admin-backend"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0"
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: ingress-svc
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
selector:
k8s-app: nginx-ingress-lb
The issue is the args. For comparison, the args on one of my working controllers are:
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
I had also created the ConfigMaps for the nginx configuration, TCP services, and UDP services.
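If those ConfigMaps don't exist yet, empty ones are enough to get the controller started; a minimal sketch, assuming the controller runs in the ingress namespace used above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress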
I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect the URLs symmetrically, meaning that request URLs would live under the /kibana4 prefix and response URLs would also be under the /kibana4 prefix.
This is not the case. server.basePath affects the URLs asymmetrically: all request URLs remain the same, but response URLs include the prefix. For example, the Kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, but all href URLs include the /kibana4 prefix.
server.basePath only works if you use a proxy that removes the /kibana4 prefix before forwarding requests to Kibana.
Below is the HAProxy configuration that I used
frontend main *:80
acl url_kibana path_beg -i /kibana4
use_backend kibana if url_kibana
backend kibana
mode http
reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2
server kibana x.x.x.x:5601
The important bit is the reqrep expression that removes the /kibana4 prefix from the URL before forwarding the request to Kibana.
Also, after changing server.basePath, you may need to modify the nginx conf to rewrite the request, otherwise it won't work. Below is the one that works for me:
location /kibana/ {
proxy_pass http://<kibana IP>:5601/; # Ensure the trailing slash is in place!
proxy_buffering off;
#proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
#access_log off;
}
The config files below worked for me for an EFK setup in a k8s cluster.
Elasticsearch StatefulSet: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
volumeMounts:
- mountPath: "/usr/share/kibana/config/kibana.yml"
subPath: "kibana.yml"
name: kibana-config
volumes:
- name: kibana-config
configMap:
name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
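With server.rewriteBasePath: true, Kibana itself expects the /kibana prefix on incoming requests, which can be verified before involving the Ingress (a quick sanity check via port-forward; the path follows from the basePath above):
kubectl port-forward deployment/kibana 5601:5601
curl http://localhost:5601/kibana/api/status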
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
nginx.ingress.kubernetes.io/proxy-body-size: 100m
# nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- example.com
# # This assumes tls-secret exists adn the SSL
# # certificate contains a CN for example.com
secretName: tls-secret
rules:
- host: example.com
http:
paths:
- backend:
service:
name: logs-kibana
port:
number: 5601
path: /kibana
pathType: Prefix
The auth file (created with htpasswd in the deployment steps below):
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
fluentd-service-account: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods
Deployment Steps.
apt install apache2-utils -y
# It will prompt for a password, pass a password.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml
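Once everything is applied, the rollouts can be verified like this (resource names as defined above); Kibana is then reachable at https://example.com/kibana behind the basic-auth prompt:
kubectl -n logging rollout status statefulset/es-cluster
kubectl rollout status deployment/kibana
kubectl rollout status daemonset/fluentd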