I have 2 websites, each with its own subdomain. To achieve this in AKS, I'm using nginx-ingress to route traffic according to the incoming subdomain.
testing-web1 - http://web1.testing.com/
testing-web2 - http://web2.testing.com/
testing-web1 - by IP address (for testing purpose)
When I enter http://web1.testing.com/, nginx-ingress routes me to testing-web1. However, when I enter http://web2.testing.com/, nginx-ingress also routes me to testing-web1 instead of testing-web2.
Is this expected behavior? Did I misconfigure something? I think I'm almost there, but I can't figure out what went wrong.
Thanks.
apiVersion: apps/v1
kind: Deployment
metadata:
name: testing-web1
spec:
replicas: 1
selector:
matchLabels:
app: testing-web1
minReadySeconds: 5
template:
metadata:
labels:
app: testing-web1
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: testing-web1
image: image-web1:1.0
ports:
- containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: testing-web2
spec:
replicas: 1
selector:
matchLabels:
app: testing-web2
minReadySeconds: 5
template:
metadata:
labels:
app: testing-web2
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: testing-web2
image: image-web2:1.0
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: testing-web1
labels:
app: testing-web1
spec:
ports:
- port: 80
selector:
app: testing-web1
---
apiVersion: v1
kind: Service
metadata:
name: testing-web2
labels:
app: testing-web2
spec:
ports:
- port: 80
selector:
app: testing-web2
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: webapp-ingress
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-staging
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- web1.testing.com
- web2.testing.com
rules:
- host: web1.testing.com
http:
paths:
- path: /
backend:
serviceName: testing-web1
servicePort: 80
- host: web2.testing.com
http:
paths:
- path: /
backend:
serviceName: testing-web2
servicePort: 80
- http:
paths:
- backend:
serviceName: testing-web1
servicePort: 80
path: /
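One way to verify which backend each host rule actually resolves to is to send requests straight to the ingress controller's external IP with an explicit Host header. This is only a diagnostic sketch; the controller's namespace and the placeholder IP depend on your install:

# Find the external IP of the nginx ingress controller service
kubectl get svc --all-namespaces | grep -i ingress

# Request each host explicitly against that IP; each should land on its own backend
curl -H "Host: web1.testing.com" http://<INGRESS_EXTERNAL_IP>/
curl -H "Host: web2.testing.com" http://<INGRESS_EXTERNAL_IP>/

# Confirm the rules and backends the controller actually sees
kubectl describe ingress webapp-ingress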
Hopefully a simple one to answer.
I've deployed a test Nginx Deployment, Service, and Ingress with the agent sidecar annotation, but it's not appearing in Jaeger-Query.
I've followed this section of the docs: https://www.jaegertracing.io/docs/1.37/operator/#auto-injecting-jaeger-agent-sidecars
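As a sanity check, and assuming the operator and its CRD are installed as in the linked docs, auto-injection needs an existing Jaeger instance for the agent to report to; a quick way to confirm one exists in the namespace used below:

kubectl get jaegers -n observability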
My Nginx .yaml file is configured as below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-nginx-test-deployment
namespace: observability
annotations:
sidecar.jaegertracing.io/inject: "true"
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.14.2
name: jaeger-nginx-test-deployment
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: jaeger-nginx-test
namespace: observability
labels:
app: nginx
spec:
ports:
- port: 80
protocol: TCP
selector:
app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jaeger-nginx-test-ingress
namespace: observability
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- http:
paths:
- path: "/*"
pathType: ImplementationSpecific
backend:
service:
name: jaeger-nginx-test
port:
number: 80
Could someone please advise how we can get this to appear in the Jaeger-Query UI?
At the moment it only recognises the 'jaeger-query' service.
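One way to check whether the sidecar was injected at all is to list the containers of the pod; if only the nginx container appears, the injection itself did not happen. A diagnostic sketch, using the app=nginx label from the Deployment above:

kubectl get pods -n observability -l app=nginx \
  -o jsonpath='{.items[*].spec.containers[*].name}'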
The below example worked for me after running:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-test-deployment
namespace: observability
annotations:
sidecar.jaegertracing.io/inject: "true"
spec:
selector:
matchLabels:
app: test-deployment
replicas: 1
template:
metadata:
labels:
app: test-deployment
spec:
containers:
- name: jaeger-test-deployment
image: jaegertracing/example-hotrod:1.28
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: jaeger-test-deployment
namespace: observability
labels:
app: test-deployment
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: test-deployment
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jaeger-test-ingress
namespace: observability
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- http:
paths:
- path: "/*"
pathType: ImplementationSpecific
backend:
service:
name: jaeger-test-deployment
port:
number: 80
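Note the switch from plain nginx to the jaegertracing/example-hotrod image: the injected jaeger-agent only forwards spans, so the application itself has to emit traces before anything shows up in Jaeger-Query. To confirm the agent sidecar was injected into the Deployment above, something like the following can be used (names match the manifests above):

kubectl get deployment jaeger-test-deployment -n observability \
  -o jsonpath='{.spec.template.spec.containers[*].name}'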
I'm trying to run a minimal sample of oauth2-proxy with Keycloak. I used oauth2-proxy's k8s example, which uses Dex, as the basis for my Keycloak example.
The problem is that I can't get the proxy to work:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-774999875d-zbczh 1/1 Running 0 2m49s
keycloak-758d7c758-27pgh 1/1 Running 0 2m49s
oauth2-proxy-5875dd67db-8qwqn 0/1 CrashLoopBackOff 2 2m49s
Logs indicate a network error:
# kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
I believe I have set up the ingress correctly, though.
Steps to reproduce
Set up the cluster:
#Create kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
Deploy the test application. My deployment.yaml file:
###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
name: oauth2-proxy
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
name: oauth2-proxy
template:
metadata:
labels:
name: oauth2-proxy
spec:
containers:
- args:
- --provider=oidc
- --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
- --upstream="file://dev/null"
- --client-id=oauth2-proxy
- --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
- --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
- --email-domain=*
- --scope=openid profile email users
- --cookie-domain=.localtest.me
- --whitelist-domain=.localtest.me
- --pass-authorization-header=true
- --pass-access-token=true
- --pass-user-headers=true
- --set-authorization-header=true
- --set-xauthrequest=true
- --cookie-refresh=1m
- --cookie-expire=30m
- --http-address=0.0.0.0:4180
image: quay.io/oauth2-proxy/oauth2-proxy:latest
# image: "quay.io/pusher/oauth2_proxy:v5.1.0"
name: oauth2-proxy
ports:
- containerPort: 4180
name: http
protocol: TCP
livenessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
successThreshold: 1
periodSeconds: 10
resources:
{}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
type: ClusterIP
ports:
- port: 4180
targetPort: 4180
name: http
selector:
name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
large_client_header_buffers 4 32k;
spec:
rules:
- host: oauth2-proxy.localtest.me
http:
paths:
- path: /
backend:
serviceName: oauth2-proxy
servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
name: httpbin
spec:
replicas: 1
selector:
matchLabels:
name: httpbin
template:
metadata:
labels:
name: httpbin
spec:
containers:
- image: kennethreitz/httpbin:latest
name: httpbin
resources: {}
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
hostname: httpbin
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: httpbin-svc
labels:
app: httpbin
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: httpbin
labels:
name: httpbin
annotations:
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
rules:
- host: httpbin.localtest.me
http:
paths:
- path: /
backend:
serviceName: httpbin-svc
servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: keycloak
name: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- args:
- -Dkeycloak.migration.action=import
- -Dkeycloak.migration.provider=singleFile
- -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
- -Dkeycloak.migration.strategy=IGNORE_EXISTING
env:
- name: KEYCLOAK_PASSWORD
value: password
- name: KEYCLOAK_USER
value: admin@example.com
- name: KEYCLOAK_HOSTNAME
value: keycloak.localtest.me
- name: PROXY_ADDRESS_FORWARDING
value: "true"
image: quay.io/keycloak/keycloak:15.0.2
# image: jboss/keycloak:10.0.0
name: keycloak
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
volumeMounts:
- mountPath: /etc/keycloak_import
name: keycloak-config
hostname: keycloak
volumes:
- configMap:
defaultMode: 420
name: keycloak-import-config
name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
name: keycloak-svc
labels:
app: keycloak
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: http
targetPort: http
port: 8080
selector:
app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: keycloak
spec:
tls:
- hosts:
- "keycloak.localtest.me"
rules:
- host: "keycloak.localtest.me"
http:
paths:
- path: /
backend:
serviceName: keycloak-svc
servicePort: 8080
---
# kubectl apply -f deployment.yaml
Configure the /etc/hosts file on the development machine to include the localtest.me domains:
127.0.0.1 oauth2-proxy.localtest.me
127.0.0.1 keycloak.localtest.me
127.0.0.1 httpbin.localtest.me
127.0.0.1 localhost
Note that I can reach http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration without any problem from my host browser. It appears that the oauth2-proxy pod cannot reach the service via the ingress. I would really appreciate any help here.
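One way to see what keycloak.localtest.me resolves to from inside the cluster (as opposed to from the host) is a throwaway lookup pod; the busybox image and pod name here are arbitrary choices for the sketch:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup keycloak.localtest.me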
It turned out that I needed to add keycloak to custom-dns.yaml.
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 dex.localtest.me. # <----Configured for dex
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
With keycloak added, it looks like this:
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 keycloak.localtest.me
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
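After editing the ConfigMap, the change only takes effect once CoreDNS is reloaded; re-using the commands from the setup steps above:

kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns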
I am new to Kubernetes. I have the following multi-tenancy scenario:
1) I have 3 namespaces as shown here:
default,
tenant1-namespace,
tenant2-namespace
2) namespace default has two database pods
tenant1-db - listening on port 5432
tenant2-db - listening on port 5432
namespace tenant1-ns has one app pod
tenant1-app - listening on port 8085
namespace tenant2-ns has one app pod
tenant2-app - listening on port 8085
3) I have applied 3 network policies in the default namespace:
a) to restrict access to both db pods from other namespaces
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all
namespace: default
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
b) to allow access to tenant1-db pod from tenant1-app of tenant1-ns only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-1
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant1-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant1-development
- podSelector:
matchLabels:
app: tenant1-app
c) to allow access to tenant2-db pod from tenant2-app of tenant2-ns only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-2
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant2-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant2-development
- podSelector:
matchLabels:
app: tenant2-app
I want to restrict access to tenant1-db to tenant1-app only, and tenant2-db to tenant2-app only. But it seems tenant2-app can access tenant1-db, which should not happen.
Below is the db-config.js for tenant2-app:
module.exports = {
HOST: "tenant1-db",
USER: "postgres",
PASSWORD: "postgres",
DB: "tenant1db",
dialect: "postgres",
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
};
As you can see, I am pointing tenant2-app at tenant1-db, yet the connection succeeds. I want to restrict tenant1-db to tenant1-app only. What modifications do I need to make to the network policies?
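For reference, one way to test the policy directly is to run a short-lived pod in tenant2-namespace carrying the app=tenant2-app label and try to open a TCP connection to tenant1-db. This is only a sketch and assumes a busybox image with nc available:

kubectl run np-test -n tenant2-namespace --rm -it --restart=Never \
  --labels="app=tenant2-app" --image=busybox:1.36 -- \
  nc -zv -w 3 tenant1-db.default.svc.cluster.local 5432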
Update 1:
tenant1-app Deployment & Service YAML:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: tenant1-app-deployment
namespace: tenant1-namespace
spec:
selector:
matchLabels:
app: tenant1-app
replicas: 1 # tells deployment to run 1 pods matching the template
template:
metadata:
labels:
app: tenant1-app
spec:
containers:
- name: tenant1-app-container
image: tenant1-app-dock-img:v1
ports:
- containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
name: tenant1-app-service
namespace: tenant1-namespace
spec:
selector:
app: tenant1-app
ports:
- protocol: TCP
port: 8085
targetPort: 8085
nodePort: 31005
type: LoadBalancer
tenant2-app Deployment & Service YAML:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: tenant2-app-deployment
namespace: tenant2-namespace
spec:
selector:
matchLabels:
app: tenant2-app
replicas: 1 # tells deployment to run 1 pods matching the template
template:
metadata:
labels:
app: tenant2-app
spec:
containers:
- name: tenant2-app-container
image: tenant2-app-dock-img:v1
ports:
- containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
name: tenant2-app-service
namespace: tenant2-namespace
spec:
selector:
app: tenant2-app
ports:
- protocol: TCP
port: 8085
targetPort: 8085
nodePort: 31006
type: LoadBalancer
Update 2:
db-pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
k8s-app: tenant1-db
name: tenant1-db
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: tenant1-db
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
k8s-app: tenant1-db
name: tenant1-db
spec:
volumes:
- name: tenant1-pv-storage
persistentVolumeClaim:
claimName: tenant1-pv-claim
containers:
- env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: tenant1db
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
image: postgres:11.5-alpine
imagePullPolicy: IfNotPresent
name: tenant1-db
volumeMounts:
- mountPath: "/var/lib/postgresql/data/pgdata"
name: tenant1-pv-storage
resources: {}
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
db-pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
k8s-app: tenant2-db
name: tenant2-db
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: tenant2-db
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
k8s-app: tenant2-db
name: tenant2-db
spec:
volumes:
- name: tenant2-pv-storage
persistentVolumeClaim:
claimName: tenant2-pv-claim
containers:
- env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: tenant2db
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
image: postgres:11.5-alpine
imagePullPolicy: IfNotPresent
name: tenant2-db
volumeMounts:
- mountPath: "/var/lib/postgresql/data/pgdata"
name: tenant2-pv-storage
resources: {}
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
Update 3:
kubectl get svc -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d2h
nginx ClusterIP 10.100.24.46 <none> 80/TCP 5d1h
tenant1-db LoadBalancer 10.111.165.169 10.111.165.169 5432:30810/TCP 4d22h
tenant2-db LoadBalancer 10.101.75.77 10.101.75.77 5432:30811/TCP 2d22h
kubectl get svc -n tenant1-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tenant1-app-service LoadBalancer 10.111.200.49 10.111.200.49 8085:31005/TCP 3d
tenant1-db ExternalName <none> tenant1-db.default.svc.cluster.local 5432/TCP 2d23h
kubectl get svc -n tenant2-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tenant1-db ExternalName <none> tenant1-db.default.svc.cluster.local 5432/TCP 2d23h
tenant2-app-service LoadBalancer 10.99.139.18 10.99.139.18 8085:31006/TCP 2d23h
Referring to the docs, let's first understand the policy below that you have for tenant2.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-2
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant2-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: development
- podSelector:
matchLabels:
app: tenant2-app
The network policy you have defined above has two elements in its from array, which means: allow connections from Pods in the local (default) namespace with the label app=tenant2-app, or from any Pod in a namespace with the label name=development.
If you merge the two selectors into a single from element, as below, it should solve the issue.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-2
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant2-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant2-development
podSelector:
matchLabels:
app: tenant2-app
The above network policy means: allow connections only from Pods with the label app=tenant2-app running in namespaces with the label name=tenant2-development.
Add the label name=tenant2-development to the tenant2-ns namespace.
Do the same for tenant1 as well, as below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-1
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant1-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant1-development
podSelector:
matchLabels:
app: tenant1-app
Add a label name=tenant1-development to the tenant1-ns namespace.
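A sketch of the labeling and verification steps described above; the namespace names are taken from the Deployment manifests in the question (tenant1-namespace / tenant2-namespace):

# Label the tenant namespaces so the namespaceSelector can match them
kubectl label namespace tenant1-namespace name=tenant1-development
kubectl label namespace tenant2-namespace name=tenant2-development

# Verify the labels and the policies in place
kubectl get namespaces --show-labels
kubectl describe networkpolicy -n default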
I am having trouble getting gRPC traffic from outside the cluster through istio-ingress on Kubernetes.
I have come far enough that I can get a 200 response. I now suspect the HTTP/1.1 in the response is simply curl not speaking HTTP/2.
Any help is appreciated, thanks!
HTTP/1.1 200 OK
content-type: application/grpc
grpc-status: 8
grpc-message: malformed method name: "/ghw"
x-envoy-upstream-service-time: 8
date: Thu, 03 May 2018 18:33:28 GMT
server: envoy
content-length: 0
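For what it's worth, curl can be forced to start with HTTP/2, which rules out the HTTP/1.1 suspicion; the address below is a placeholder for the istio-ingress endpoint, and grpcurl is mentioned only as an assumed alternative client:

# Talk HTTP/2 from the first byte instead of negotiating an upgrade
curl -v --http2-prior-knowledge http://<INGRESS_IP>/ghw

# A real gRPC client is a better end-to-end test (requires server reflection)
grpcurl -plaintext <INGRESS_IP>:80 list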
The YAML setup is as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: grpc-deployment
labels:
app: grpc
spec:
selector:
matchLabels:
app: grpc
replicas: 1
template:
metadata:
labels:
app: grpc
spec:
containers:
- name: grpc
image: local/gcd
imagePullPolicy: Never
ports:
- name: grpc-port
containerPort: 3000
# protocol: HTTP2
---
apiVersion: v1
kind: Service
metadata:
name: grpc-service
spec:
# type: LoadBalancer
selector:
app: grpc
ports:
- port: 3000
name: grpc
# protocol: HTTP2
targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grpc-ingress
annotations:
kubernetes.io/ingress.class: "istio"
# ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- http:
paths:
- path: /ghw
backend:
serviceName: grpc-service
servicePort: 3000
I have Zipkin gathering data from my other services inside Kubernetes. I have an nginx ingress controller defined to expose my services, and everything works nicely. Since Zipkin is an admin tool, I'd like to put it behind some security, i.e. basic auth. If I add the 3 lines marked between "#problematic lines - start" and "#problematic lines - stop" below, my Zipkin frontend is no longer reachable and I get a 503.
The basic auth setup follows https://github.com/kubernetes/ingress/tree/master/examples/auth/basic/nginx
and there is nothing unusual there.
apiVersion: v1
kind: Service
metadata:
name: zipkin
labels:
app: zipkin
tier: monitor
spec:
ports:
- port: 9411
targetPort: 9411
selector:
app: zipkin
tier: monitor
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: zipkin
spec:
replicas: 1
template:
metadata:
labels:
app: zipkin
tier: monitor
spec:
containers:
- name: zipkin
image: openzipkin/zipkin
resources:
requests:
memory: "300Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "250m"
ports:
- containerPort: 9411
---
apiVersion: v1
kind: Service
metadata:
name: zipkin-ui
labels:
app: zipkin-ui
tier: monitor
spec:
ports:
- port: 80
targetPort: 80
selector:
app: zipkin-ui
tier: monitor
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: zipkin-ui
spec:
replicas: 1
template:
metadata:
labels:
app: zipkin-ui
tier: monitor
spec:
containers:
- name: zipkin-ui
image: openzipkin/zipkin-ui
resources:
requests:
memory: "300Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "250m"
ports:
- containerPort: 80
env:
- name: ZIPKIN_BASE_URL
value: "http://zipkin:9411"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: zipkin
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/enable-cors: "true"
ingress.kubernetes.io/ssl-redirect: "false"
#problematic lines - start
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-secret: basic-auth
ingress.kubernetes.io/auth-realm: "Authentication Required"
#problematic lines - stop
spec:
rules:
- host: "zipkin.lalala.com"
http:
paths:
- path: /
backend:
serviceName: zipkin-ui
servicePort: 80
I'm not sure whether this has any influence, but I used the https://github.com/kubernetes/ingress/blob/master/controllers/nginx/rootfs/etc/nginx/nginx.conf file as a template for my nginx ingress controller, since I needed to modify some CORS rules. I see this part there:
{{ if $location.BasicDigestAuth.Secured }}
{{ if eq $location.BasicDigestAuth.Type "basic" }}
auth_basic "{{ $location.BasicDigestAuth.Realm }}";
auth_basic_user_file {{ $location.BasicDigestAuth.File }};
{{ else }}
auth_digest "{{ $location.BasicDigestAuth.Realm }}";
auth_digest_user_file {{ $location.BasicDigestAuth.File }};
{{ end }}
proxy_set_header Authorization "";
{{ end }}
but I see no corresponding output from: kubectl exec nginx-ingress-controller-lalala-lalala -n kube-system cat /etc/nginx/nginx.conf | grep auth. Because of this, my guess is that I need to add some annotation to make this {{ if $location.BasicDigestAuth.Secured }} part work. Unfortunately, I cannot find anything about it.
I have the same config running on my ingress 9.0-beta.11, so I guess it's just a misconfiguration.
First, I'd recommend not changing the template: use the default values, and only customize it once basic auth is working.
What do the ingress controller logs show? Did you create the basic-auth secret in the same namespace as the Ingress resource?
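For reference, the secret from the linked example is created roughly like this; it must live in the same namespace as the Ingress (default here), and the key inside the secret must be named auth for the controller to pick it up:

htpasswd -c auth foo
kubectl create secret generic basic-auth --from-file=auth --namespace=default

# Verify the secret exists where the Ingress expects it
kubectl get secret basic-auth -n default -o yaml

# Check the controller for errors locating or applying the secret
kubectl logs -n kube-system <nginx-ingress-controller-pod> | grep -i auth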