308 Redirect Loop with ExternalName Service Using Ingress-NGINX

I'm using ingress-nginx-controller (0.32.0) and am attempting to point an ExternalName service at a URL, yet it's stuck in a loop of 308 redirects. I've seen plenty of similar issues out there, and I figure there's just one thing off with my configuration. Is there something really small that I'm missing here?
ConfigMap for NGINX Configuration:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "true"
  use-forwarded-headers: "true"
  proxy-real-ip-cidr: "0.0.0.0/0" # restrict this to the IP addresses of the ELB
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
  backend-protocol: "HTTPS"
  ssl-redirect: "false"
  http-snippet: |
    map true $pass_access_scheme {
      default "https";
    }
    map true $pass_port {
      default 443;
    }
    server {
      listen 8080 proxy_protocol;
      return 308 https://$host$request_uri;
    }
NGINX Service:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "XXX"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: http
Ingress Definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-members-portal
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: foo-111.dev.bar.com
      http:
        paths:
          - path: /login
            backend:
              serviceName: foo-service
              servicePort: 80
ExternalName Service:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ExternalName
  externalName: foo.staging.bar.com
  selector:
    app: foo
EDIT
I figured it out! I wanted to point to a service in another namespace, so I changed the ExternalName service to this:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ExternalName
  externalName: foo-service.staging.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: foo

I believe the issue you're seeing is that your external service isn't working the way you think it is. In your ingress definition, you point the /login path at port 80 of foo-service. In theory, this would send you back to your ingress controller's ELB, redirect your request to https://foo.staging.bar.com, and move on.
However, external services don't really work that way. Essentially, all externalName does is have KubeDNS/CoreDNS answer lookups for the service with the CNAME information for the external hostname. It doesn't handle redirects of any kind.
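You can see this for yourself with a one-off pod (a sketch; the pod name and busybox image are illustrative, and the service is assumed to live in the dev namespace):
# Run a throwaway pod and ask cluster DNS what the ExternalName service
# really resolves to: expect a bare CNAME for foo.staging.bar.com,
# with no ports and no redirects involved.
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.36 -- \
  nslookup foo-service.dev.svc.cluster.local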
So in this case, a request to foo.staging.bar.com:80 is answered with a 308 redirect to foo.staging.bar.com:443. You are directing the request for that site to port 80, which maps to port 8080 in the ingress controller, which then redirects that request back out to the ELB's port 443. That redirect logic doesn't coexist with the external service.
The problem, then, is that your app will essentially try to do this:
http://foo-service:80 --> http://foo.staging.bar.com:80/login --> https://foo.staging.bar.com:443
My expectation is that you never actually reach the third step. Why? First, because foo-service:80 is not directing you to port 443; and second, because all CoreDNS is doing in the backend is running a DNS check against foo-service's external name, foo.staging.bar.com. It's not handling any kind of redirection. So depending on how the host returned to your app is handled, your app may never actually reach that site and port. Instead, it keeps looping back to http://foo-service:80 for those requests, which will always result in a 308 loop.
The key here is that foo-service, not foo.staging.bar.com, is the Host header being sent to the NGINX ingress controller. So on a redirect to 443, my expectation is that you are really just hitting foo-service, and any redirects are being improperly sent back to foo-service:80.
A good way to test this is to run curl -L -v http://foo-service:80 and see what happens. That follows all redirects from the service and shows you how your ingress controller is handling those requests.
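If you would rather test from outside the cluster, you can also hit the ingress ELB directly and pin the Host header the Ingress rule expects (a sketch; the ELB hostname is a placeholder):
# Follow redirects verbosely through the ingress controller, sending the
# Host header from the Ingress rule (replace the ELB placeholder).
curl -L -v -H "Host: foo-111.dev.bar.com" https://<elb-hostname>/login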
It's hard to give more detail without direct access to your setup. However, if you know your app will always be hitting port 443, a good fix in this case would probably be to change your ingress and service to look something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-members-portal
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: foo-111.dev.bar.com
      http:
        paths:
          - path: /login
            backend:
              serviceName: foo-service
              servicePort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ExternalName
  externalName: foo.staging.bar.com
That should ensure you don't hit any kind of HTTPS redirect. Unfortunately, it may also cause issues with SSL validation, but that would be another issue altogether. The last thing I can recommend is to simply use foo.staging.bar.com itself rather than an external service in this case.
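If the external host needs to see its own name (rather than foo-service or the original ingress host), ingress-nginx also has annotations for that; a sketch, worth verifying against your controller version:
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Rewrite the Host header sent upstream so the external server sees
    # its own hostname instead of the one from the original request.
    nginx.ingress.kubernetes.io/upstream-vhost: "foo.staging.bar.com"
    # Proxy to the upstream over HTTPS if it only serves TLS.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"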
For more information, see: https://kubernetes.io/docs/concepts/services-networking/service/#externalname
Hope this helps!

Related

How to setup an ingress controller for kubernetes?

Sorry, I am new to Kubernetes and everything...
I have a Java back end in a ClusterIP service and a front end in a NodePort service. I try to make a request to the back end from the front end (from the browser) and it doesn't work.
I saw that I needed to set up an ingress controller to make it work, but each time I do a "minikube tunnel" and go to my localhost, I get an NGINX 404 error. And the address http://toto.virtualisation doesn't work either (as if it doesn't exist).
Here is the setup of my front end and my ingress controller in my YAML file:
# Front Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end-deployment
spec:
  selector:
    matchLabels:
      app: front-end
  template:
    metadata:
      labels:
        app: front-end
    spec:
      containers:
        - name: front-end-container
          image: oxasa/front-end-image:latest
---
# Front Service
apiVersion: v1
kind: Service
metadata:
  name: front-end-service
spec:
  ports:
    - name: http
      targetPort: 80
      port: 80
  type: NodePort
  selector:
    app: front-end
---
# Front Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-end-ingress
spec:
  rules:
    - host: toto.virtualisation
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front-end-service
                port:
                  number: 80
If you see anything that needs to be done to make it work...
Try adding
spec:
  ingressClassName: nginx
to the Ingress resource to ensure nginx picks up the created Ingress.
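In the context of the question's manifest, that would look roughly like this (only the ingressClassName line is new; everything else is from the original Ingress):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-end-ingress
spec:
  # Tell the nginx ingress controller to claim this Ingress.
  ingressClassName: nginx
  rules:
    - host: toto.virtualisation
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front-end-service
                port:
                  number: 80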
Also, an Ingress is not required for service-to-service communication; you can use the Kubernetes internal DNS from your front-end service.
You can make the front end access the back end using something like {service-name}.{namespace}.svc.cluster.local.
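For example, with hypothetical names, a back-end Service called back-end-service in the default namespace exposing port 8080 would be reachable from inside the cluster at:
# Cluster-internal DNS name (only resolvable from pods in the cluster):
http://back-end-service.default.svc.cluster.local:8080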

NGINX Ingress Routing based on Header

I have an nginx-ingress calling a custom auth-service before sending requests to the backend service, using this simple ConfigMap and Ingress:
apiVersion: v1
kind: ConfigMap
metadata:
  ...
data:
  global-auth-url: auth-service-url:8080/authenticate
  global-auth-method: GET
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  ...
spec:
  rules:
    - host: host1
      http:
        paths:
          - backend:
              serviceName: backend-service
              servicePort: 8080
Now I need something different.
How can I send requests, all with the same "Host" header, through different flows: one authenticated by auth-service and connected to backend-service1, and the other connected to backend-service2 without any authentication?
To be clear, using the custom header "Example-header: test":
If "Example-header" is "test", authenticate via my auth-service before sending to backend-service, as is done now.
If "Example-header" is not defined, send requests to a different backend service and do not use auth-service in the process.
I tried a couple of things, namely having two Ingresses, one with global-auth-url and the other with nginx.ingress.kubernetes.io/enable-global-auth: "false", but the auth-service is always called.
Can I do this with NGINX, or do I have to use Istio or Ambassador?
One way you can achieve this behavior is by abusing the canary feature.
For your backend-service, create a normal Ingress, e.g.:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-backend
spec:
  ingressClassName: nginx
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 80
Create a second Ingress for your auth-service with canary enabled, and set the header name and value, e.g.:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-auth
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: Example-header
    nginx.ingress.kubernetes.io/canary-by-header-value: test
spec:
  ingressClassName: nginx
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: auth-service
                port:
                  number: 80
Now, every request with Example-header: test routes to auth-service. Any other value, e.g. Example-header: some-value, will not route to auth-service but rather go to your backend-service.
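A quick way to verify the split (a sketch, assuming the controller is reachable on localhost as in the Ingress rules above):
# Matches the canary rule, so it is routed to auth-service:
curl -H "Example-header: test" http://localhost/
# No header (or any other value) falls through to backend-service:
curl http://localhost/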

Minikube Ingress Does not resolve but minikube IP does

I am running a simple pod with an image from a local image registry in a minikube cluster on Windows 10, along with a simple NodePort service. The container is reachable from the browser at <minikube_ip>:30080.
However, now I want to set up an ingress controller because I want to use a domain rather than the IP. The ingress works for something simple like a basic nginx pod, but does not work for this pod. I was previously using jwilder/nginx-proxy in docker-compose, which had some conf files that needed to be attached in the conf.d directory. Since I am moving to Kubernetes, I thought I would omit the conf files and the reverse-proxy image entirely.
Now, after the hosts file is updated, the domain is reachable via curl and is also pingable; however, it simply cannot be reached in the browser.
pod-yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        io.kompose.service: api
    spec:
      containers:
        - env:
            - name: DEV_PORT
              value: "80"
          image: localhost:5000/api:2.3
          imagePullPolicy: "IfNotPresent"
          name: api
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
status: {}
Service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: C:\Users\***kompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  selector:
    io.kompose.service: api
  type: NodePort
  ports:
    - name: "http"
      port: 80
      targetPort: 80
      nodePort: 30080
Ingress.yaml
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
    - secretName: oaky-tls
      hosts:
        - api.localhost
  rules:
    - host: api.localhost
      http:
        paths:
          - path: /
            backend:
              serviceName: api
              servicePort: 80
I have checked and the TLS secret is available, I am not understanding the issue here and would really appreciate some help.
Solved:
Chrome was overlooking the etc/hosts file, so I did the following:
1. Switched to Firefox, and instantly the URLs were working.
2. Added an annotation to denote the ingress class:
   kubernetes.io/ingress.class: nginx
3. Added an annotation to make sure requests are redirected to SSL:
   nginx.ingress.kubernetes.io/ssl-redirect: "true"
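Put together, the Ingress metadata from the question ends up looking roughly like this:
metadata:
  name: tls-ingress
  annotations:
    # Make sure the nginx controller claims this Ingress.
    kubernetes.io/ingress.class: nginx
    # Redirect plain-HTTP requests to HTTPS.
    nginx.ingress.kubernetes.io/ssl-redirect: "true"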

Kubernetes ingress-nginx - How can I disable listening on https if no TLS configured?

I'm using kubernetes ingress-nginx, and this is my Ingress spec. http://example.com works fine, as expected. But when I go to https://example.com it still works, pointing to the default backend with the Fake Ingress Controller certificate. How can I disable this behaviour? I want this particular ingress not to listen on HTTPS at all, since there is no TLS configured.
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: http-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: my-deployment
              servicePort: 80
I've tried the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation. However, it has no effect.
I'm not aware of an ingress-nginx configmap value or ingress annotation to easily disable TLS.
You could remove port 443 from your ingress controller's Service definition, i.e. remove the https entry from the spec.ports array:
apiVersion: v1
kind: Service
metadata:
  name: mingress-nginx-ingress-controller
spec:
  ports:
    - name: https
      nodePort: NNNNN
      port: 443
      protocol: TCP
      targetPort: https
nginx will still be listening on a TLS port, but no clients outside the cluster will be able to connect to it.
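One way to apply that change in place (a sketch; the service name comes from the manifest above, and the namespace depends on your install):
# Open the controller Service for editing, delete the https entry from
# spec.ports, then save to apply.
kubectl -n ingress-nginx edit service mingress-nginx-ingress-controller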
Redirection is not involved in your problem.
The ingress controller is listening on both ports, 80 and 443. When you configure an ingress with only port 80, reaching port 443 sends you to the default backend, which is the expected behaviour.
A solution is to add another nginx controller that only listens on port 80, and then configure your ingresses with kubernetes.io/ingress.class: myingress.
When creating the new nginx controller, change the --ingress-class=myingress argument of the DaemonSet, as sketched below; it will then handle only Ingresses annotated with this class.
If you use Helm to deploy it, simply override the controller.ingressClass value.
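For a non-Helm install, the flag goes on the controller container in the DaemonSet spec (a sketch; the image tag and container layout may differ in your manifests):
containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
    args:
      - /nginx-ingress-controller
      # Only reconcile Ingresses annotated with
      # kubernetes.io/ingress.class: myingress
      - --ingress-class=myingress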

Nginx headers not appearing with Ingress controller

I'm having a problem where my NGINX headers are not showing when I connect to the ingress controller's IP. For example, say I have an HTTP header called "x-Header" set up in my NGINX ConfigMap. If I go to the ingress controller's IP I don't see the header, but when I go to the NGINX load balancer IP, I do see the headers. Also, when I go to the ingress IP I get the correct cert, but the NGINX IP gives me a self-signed cert. I don't think I have the controller set up right.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.global-static-ip-name: staticip
  labels:
    app: exam
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: tls-secret
  backend:
    serviceName: example-app
    servicePort: 80
Ingress node:
apiVersion: v1
kind: Service
metadata:
  name: exam-node
  labels:
    app: exam
spec:
  type: NodePort
  selector:
    app: example-app
    tier: backend
  ports:
    - port: 80
      targetPort: 80
To create the NGINX controller I used the command helm install stable/nginx-ingress --name nginx-ingress --set rbac.create=true
My nginx controller logs say this:
2018-07-29 19:57:11.000 CDT
updating Ingress default/example-ingress status to [{12.346.151.45 }]
It seems the nginx controller knows about the ingress, but the ingress doesn't know about the controller. I am probably confused about how Ingress and NGINX work together.
