I'm new to Ingress in Kubernetes. I installed the Ingress NGINX controller in my AKS cluster. My application runs in a Pod (port 80) in the testns namespace, and I created a Service for it in the same namespace. Now I want to access my application from outside, so I created an Ingress YAML for it.
The Service (output of kubectl get svc):
kubectl get svc -A | grep my-svc
testns   my-svc   ClusterIP   10.0.116.192   <none>   80/TCP   93m
The Ingress (output of kubectl describe ingress):
Name:             example-ingress
Namespace:        testns
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host  Path    Backends
  ----  ----    --------
  *
        /apple  my-svc:80 (10.244.1.33:80)
Annotations:  ingress.kubernetes.io/rewrite-target: /
Events:       <none>
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: my-svc
          servicePort: 80
When I access it using <IP_ADDRESS_OF_THE_INGRESS>/apple, it gives a 404 error. I want to know why the application can't be accessed.
Try adding the ingress class annotation to your YAML config, and use the nginx.ingress.kubernetes.io/ prefix for the rewrite annotation (the NGINX Ingress controller does not recognize the plain ingress.kubernetes.io/ prefix):
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
Example
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: my-svc
          servicePort: 80
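If it still returns 404 after that change, a quick way to narrow things down is to check the Ingress and the controller logs (a generic debugging sketch; the ingress-nginx namespace and the app.kubernetes.io/name=ingress-nginx label are typical defaults and may differ depending on how the controller was installed):
kubectl describe ingress example-ingress -n testns   # the backend should list the Pod IP, as above
kubectl get pods -n ingress-nginx                    # locate the controller Pod
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50   # see which upstream each request was proxied to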
Related
I have created a sample .NET Core WebAPI and pushed the images to ACR. Now I am deploying it to AKS with the NGINX Ingress Controller, using an Ingress resource pointing to a ClusterIP Service that points to the deployed Pods running the image.
The issue is that when I change the ClusterIP Service to LoadBalancer to hit it directly for testing, I get results from the WebAPI. But when I change it back to ClusterIP and hit it through the NGINX Ingress Controller IP address, I always get 404 Not Found.
Below is the code. Please suggest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-forecast-webapi-deployment
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: weather-forecast-webapi-pod
  template:
    metadata:
      labels:
        app: weather-forecast-webapi-pod
    spec:
      containers:
      - name: weather-forecast-webapi-container
        image: employeeconnectacr.azurecr.io/demoapi:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: weather-forecast-webapi-service-clusterip
  namespace: development
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: weather-forecast-webapi-pod
---
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: econnect-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          serviceName: weather-forecast-webapi-service-clusterip
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 52.141.219.175
Looks like your ingress object is misconfigured. I assume you want to rewrite the /demo prefix to /, so that paths like /demo/foo/bar are rewritten to /foo/bar.
Here is the rewrite explained.
Here is the example:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: econnect-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'
spec:
  rules:
  - http:
      paths:
      - path: /demo(/|$)(.*)
        pathType: Prefix
        backend:
          serviceName: weather-forecast-webapi-service-clusterip
          servicePort: 80
Notice that all I changed is the path and the rewrite-target group number. In /demo(/|$)(.*) the brackets () create groups that are referenced in rewrite-target: /$2. The first group, $1, matches / or the end of the string, and the second group, $2, matches everything after it; so everything after /demo/ is copied and becomes the new path.
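For example, with the ingress above and the controller IP from the status section, requests would be mapped roughly like this (WeatherForecast is just the default route of the .NET template, assumed here only for illustration):
# GET http://52.141.219.175/demo                  -> forwarded to the service as /
# GET http://52.141.219.175/demo/WeatherForecast  -> forwarded as /WeatherForecast
curl http://52.141.219.175/demo/WeatherForecast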
I have different applications running in the same Kubernetes Cluster.
I would like multiple domains to access my Kubernetes cluster and be routed differently depending on the domain.
For each domain I would like different annotations/configuration.
Without the annotations I have an ingress deployment such as:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor
  namespace: myapps
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  type: LoadBalancer
  tls:
  - hosts:
    - foo.bar.dev
    - bar.foo.dev
    secretName: tls-secret
  rules:
  - host: foo.bar.dev
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: 9000
        path: /(.*)
  - host: bar.foo.dev
    http:
      paths:
      - backend:
          serviceName: varfoo
          servicePort: 80
        path: /(.*)
But they need different configurations; for example, one needs the following annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
And the other would have these:
nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
Those configurations are not compatible, and I can't find a way to specify a configuration by host.
I also understand that it's impossible to have two Ingresses serving external HTTP requests.
So what am I not understanding / doing wrong?
I also understand that it's impossible to have two Ingresses serving external HTTP requests
I am not sure where you found this, but you definitely can do this.
You should be able to create two separate ingress objects like following:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-bar
  namespace: myapps
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  tls:
  - hosts:
    - bar.foo.dev
    secretName: tls-secret-bar
  rules:
  - host: bar.foo.dev
    http:
      paths:
      - backend:
          serviceName: barfoo
          servicePort: 80
        path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-foo
  namespace: myapps
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: "FCGI"
    nginx.ingress.kubernetes.io/fastcgi-index: "index.php"
    nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"
spec:
  tls:
  - hosts:
    - foo.bar.dev
    secretName: tls-secret-foo
  rules:
  - host: foo.bar.dev
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: 9000
        path: /(.*)
This is a completely valid ingress configuration, and most probably the only valid one that will solve your problem.
Each ingress object configures one domain.
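After applying both objects you should see something like this (illustrative output; the ADDRESS column depends on your controller's service):
kubectl get ingress -n myapps
# NAME            HOSTS         ADDRESS           PORTS     AGE
# frontdoor-bar   bar.foo.dev   <controller-ip>   80, 443   1m
# frontdoor-foo   foo.bar.dev   <controller-ip>   80, 443   1m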
I have no idea what the problem is in my case.
I deployed a Prometheus server on AKS (Azure's managed Kubernetes) and want to expose the Prometheus web UI through the ingress controller with the following config.
I also referred to this:
https://coreos.com/operators/prometheus/docs/latest/user-guides/exposing-prometheus-and-alertmanager.html
# Prometheus
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  version: v2.13.1
  replicas: 2
  retention: 1d
  serviceAccountName: prometheus
  ...
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: prometheus
---
# Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      # Just another subpath to make sure nginx itself works
      # - backend:
      #     serviceName: aks-helloworld-one
      #     servicePort: 80
      #   path: /hello-world-one
      - backend:
          serviceName: prometheus-service
          servicePort: 80
        path: /prometheus
I added another path in my test stage, and nginx works successfully for aks-helloworld-one.
However, it does not work for the Prometheus server; I always get "404 page not found" in return.
Does anyone know how to solve this problem?
I think I know where the problem is. Actually my previous comment was wrong:
If it's available under /, you don't need any rewrites in your ingress
and it should work straight away.
Well... actually it won't, for one simple reason. When you try to access your Prometheus UI via the ingress, you use the <ingress ip>/prometheus URL. The /prometheus path routes you correctly to the appropriate backend Service:
- backend:
    serviceName: prometheus-service
    servicePort: 80
  path: /prometheus
But the problem occurs because the /prometheus path (which you need in order to be routed to prometheus-service and eventually to one of the Pods exposed by this Service) is forwarded unchanged to the target Pod serving the actual content.
You get the 404 page not found error message because the HTTP request that reaches the target webserver asks for the content of the /prometheus directory instead of /, from which it is actually served.
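To illustrate what happens without a rewrite (hypothetical ingress IP; Prometheus is assumed to be serving under its default root path):
# client request:         GET http://<ingress-ip>/prometheus/graph
# request reaching Pod:   GET /prometheus/graph   <- Prometheus only serves /graph, /targets, ... under "/",
#                                                    so it answers "404 page not found"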
So if you change your ingress to something like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: prometheus-service
          servicePort: 80
        path: /
most probably everything would work as expected and you'd get the Prometheus UI rather than 404 Not Found.
Although it may work, it's good only for debugging purposes, as no one wants to use an ingress just to expose something under the root path.
The following ingress definition should resolve your problem (Yes, rewrites are necessary in this scenario!):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: prometheus-service
          servicePort: 80
        path: /prometheus(/|$)(.*)
The rewrite used above ensures that the original access path /prometheus gets rewritten to / before the request reaches the target Pod.
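With that ingress in place, the mapping becomes (again with a hypothetical ingress IP):
# /prometheus         -> rewritten to /
# /prometheus/graph   -> rewritten to /graph
curl http://<ingress-ip>/prometheus/graph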
I deployed the cluster on AWS; I did this configuration with Helm charts and it worked for me.
Ingress: notice nginx.ingress.kubernetes.io/rewrite-target: /$2 and path: /prometheus(/|$)(.*) in this file, as mentioned in other answers.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "12h"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/acme-challenge-type: "http01"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - myhost.cl
    secretName: mysecret-tls
  rules:
  - host: myhost.cl
    http:
      paths:
      - path: /prometheus(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: monitoring-kube-prometheus-prometheus
            port:
              number: 9090
Prometheus: there is a field inside spec called externalUrl that sets the external URL under which Prometheus is reachable, so that links and redirects generated by the UI point to the address you access in the browser.
# Source: kube-prometheus-stack/templates/prometheus/prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: monitoring-kube-prometheus-prometheus
  namespace: monitoring
  labels:
    app: kube-prometheus-stack-prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: monitoring
    app.kubernetes.io/version: "19.0.2"
    app.kubernetes.io/part-of: kube-prometheus-stack
    chart: kube-prometheus-stack-19.0.2
    release: "monitoring"
    heritage: "Helm"
    prometheus: devops
spec:
  # ... Your data here
  externalUrl: http://myhost.cl/prometheus
Also be careful not to expose your Prometheus instance without at least an authentication method such as basic auth; here's something that can help you with that: https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
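For reference, the basic-auth setup from that guide boils down to something like this (a sketch; the secret name basic-auth and the user admin are just examples):
# create an htpasswd file and a secret for the ingress to reference
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth -n monitoring
# then add these annotations to the ingress above:
#   nginx.ingress.kubernetes.io/auth-type: basic
#   nginx.ingress.kubernetes.io/auth-secret: basic-auth
#   nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"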
I think the issue is with your rewrite-target and path; can you try with a subdomain rather than a path, to confirm? I have the following Prometheus Ingress working on a subdomain.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx-ingress
  name: prometheus-monitoring
  namespace: monitoring
spec:
  backend:
    serviceName: prometheus
    servicePort: 9090  # As my service is listening on 9090
  rules:
  - host: prometheus-monitoring.DOMAIN
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090  # As my service is listening on 9090
        path: /
And below is my service manifest
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus
    chart: prometheus-operator-5.11.0
    heritage: Tiller
  name: prometheus
  namespace: monitoring
spec:
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: None
  type: ClusterIP
I have an app running in a Kubernetes cluster that uses TLS and OAuth2 authentication as part of the NGINX ingress. It all runs fine, but now I want to split my ingresses so that I have a master and a number of minions, making sure that all the authentication is handled for the complete host domain. When I do this, the forced sign-in breaks. I can still reach it if I add the path manually, but it is no longer required to reach the application. Is this possible to solve?
Example
Regular ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/auth-url: "https://my-app.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://my-app.com/oauth2/start?rd=https%3A%2F%2F$host$request_uri"
spec:
  tls:
  - secretName: my-app-com-tls
    hosts:
    - my-app.com
  rules:
  - host: my-app.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: my-app
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
  labels:
    app: oauth2-proxy
    app.kubernetes.io/managed-by: Helm
    chart: oauth2-proxy-3.1.0
    heritage: Helm
    release: oauth2-proxy
spec:
  rules:
  - host: my-app.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
  tls:
  - hosts:
    - my-app.com
    secretName: my-app-com-tls
Master - minion
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress-master
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: "master"
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/auth-url: "https://my-app.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://my-app.com/oauth2/start?rd=https%3A%2F%2F$host$request_uri"
spec:
  tls:
  - secretName: my-app-com-tls
    hosts:
    - my-app.com
  rules:
  - host: my-app.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress-minion
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: "minion"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: my-app.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: my-app
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: minion
  labels:
    app: oauth2-proxy
    app.kubernetes.io/managed-by: Helm
    chart: oauth2-proxy-3.1.0
    heritage: Helm
    release: oauth2-proxy
spec:
  rules:
  - host: my-app.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
It turns out that I had unintentionally mixed features that are defined in two different nginx ingress controller projects (NGINX Inc's nginx-ingress and the community's kubernetes/ingress-nginx). So the reason it breaks is simply that there is no support for the master/minion hierarchy in the controller I am actually using in my cluster, and there seems to be no support for the authentication annotations in the other project.
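A rough way to check which of the two controllers is actually running is to look at the controller image (the image names below are typical defaults, not guaranteed):
kubectl get deploy,daemonset -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}' | grep -i nginx
# registry.k8s.io/ingress-nginx/controller:...  -> community kubernetes/ingress-nginx (no master/minion support)
# nginx/nginx-ingress:...                       -> NGINX Inc controller (supports nginx.org/mergeable-ingress-type)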
I have created a feature suggestion.
I have an app in Kubernetes which is served over HTTPS. Now I would like to exclude one URL from that rule and serve it over HTTP, for performance reasons. I have been struggling with that the whole day and it seems impossible.
These are my ingress YAMLs:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.31.1.11"],"port":443,"protocol":"HTTPS","serviceName":"myservice:myservice","ingressName":"myservice:myservice","hostname":"app.server.test.mycompany.com","path":"/","allNodes":true}]'
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2020-02-17T13:14:19Z"
  generation: 1
  labels:
    app-kubernetes-io/instance: mycompany
    app-kubernetes-io/managed-by: Tiller
    app-kubernetes-io/name: mycompany
    helm.sh/chart: mycompany-1.0.0
    io.cattle.field/appId: mycompany
  name: mycompany
  namespace: mycompany
  resourceVersion: "565608"
  selfLink: /apis/extensions/v1beta1/namespaces/mycompany/ingresses/mycompany
  uid: c6b93108-a28f-4de6-a62b-487708b3f5d1
spec:
  rules:
  - host: app.server.test.mycompany.com
    http:
      paths:
      - backend:
          serviceName: mycompany
          servicePort: 80
        path: /
  tls:
  - hosts:
    - app.server.test.mycompany.com
    secretName: mycompany-tls-secret
status:
  loadBalancer:
    ingress:
    - ip: 172.31.1.11
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.31.1.1"],"port":80,"protocol":"HTTP","serviceName":"mycompany:mycompany","ingressName":"mycompany:mycompany-particular-service","hostname":"app.server.test.mycompany.com","path":"/account_name/particular_service/","allNodes":true}]'
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
  creationTimestamp: "2020-02-17T13:14:19Z"
  generation: 1
  labels:
    app-kubernetes-io/instance: mycompany
    app-kubernetes-io/managed-by: Tiller
    app-kubernetes-io/name: mycompany
    helm.sh/chart: mycompany-1.0.0
    io.cattle.field/appId: mycompany
  name: mycompany-particular-service
  namespace: mycompany
  resourceVersion: "565609"
  selfLink: /apis/extensions/v1beta1/namespaces/mycompany/ingresses/mycompany-particular-service
  uid: 88127a02-e0d1-4b2f-b226-5e8d160c1654
spec:
  rules:
  - host: app.server.test.mycompany.com
    http:
      paths:
      - backend:
          serviceName: mycompany
          servicePort: 80
        path: /account_name/particular_service/
status:
  loadBalancer:
    ingress:
    - ip: 172.31.1.11
So, as you can see above, I would like to serve /particular_service/ over HTTP. Ingress, however, redirects to HTTPS, as TLS is enabled for that host in the first ingress.
Is there any way to disable TLS just for that one specific path when the same host is used in both configurations?
In short, I would like to have:
https://app.server.test.mycompany.com
but
http://app.server.test.mycompany.com/account_name/particular_service/
I've tested with two Ingresses for the same domain, the first one with TLS enabled and the second without TLS, and it worked.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: echo-https
spec:
  tls:
  - hosts:
    - myapp.mydomain.com
    secretName: https-myapp.mydomain.com
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - backend:
          serviceName: echo-svc
          servicePort: 80
        path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: echo-http
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - backend:
          serviceName: echo-svc
          servicePort: 80
        path: /insecure
From the NGINX docs:
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.
This can be disabled globally using ssl-redirect: "false" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.
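If you prefer the global route mentioned above, ssl-redirect is a key in the controller's ConfigMap (a sketch; the ConfigMap name and namespace depend on how the controller was installed, e.g. the community Helm chart typically uses ingress-nginx-controller in the ingress-nginx namespace):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  ssl-redirect: "false"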
Please let me know if that helps.
Also try adding nginx.ingress.kubernetes.io/ssl-redirect: "false". It had worked for me previously, so you can give it a try.