We are using the Nginx ingress controller in Azure Kubernetes Service to direct traffic to a number of .NET APIs that we run there.
All calls are routed via Azure Application Gateway for WAF and DNS reasons.
Application Gateway has "health probes" that hit your backend pools (which, in our case, point to the external IP of the nginx ingress controller service), performing a GET at the root.
Previously we had a Service for each site, set up as type LoadBalancer, which gave each site its own external IP address; we pointed a backend pool at each of those and it worked fine.
Now we are trying to do things more securely and route all calls via the ingress controller, so we have a single backend pool with the ingress controller's IP address. Because nothing answers at the root, the health probe comes back unhealthy and the site doesn't work.
I have set up the Ingress for the site so that a request hitting the backend pool with the domain below will work, but the health probe doesn't do that; it just does a GET against the IP address of the controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: "api.mydomain.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-api-service
            port:
              number: 443
I installed the controller using the Helm chart, and I just want a GET request to the controller to return 200 while any other request is routed appropriately. I tried the configuration below for our Ingress, routing a call to the root to the API (which returns 200 at its root), but I don't think that was the right place for it and it didn't work. It might have to be part of the Helm command that sets up the ingress controller itself.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-service
            port:
              number: 443
  - host: "api.mydomain.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-api-service
            port:
              number: 443
The nginx ingress controller's default backend exposes a /healthz endpoint which returns 200 OK. You can point your App Gateway health probe at this endpoint instead of the root.
Also, instead of using App Gateway + NGINX ingress controller, which requires two hops before reaching your service, consider using the Application Gateway Ingress Controller (AGIC).
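If you do go the AGIC route, the same routing can be expressed with an Ingress handled directly by the Application Gateway. A minimal sketch, assuming AGIC is installed in the cluster and reusing the names from the question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway   # hand this Ingress to AGIC
spec:
  rules:
  - host: "api.mydomain.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-api-service
            port:
              number: 443
With AGIC the Application Gateway targets the pods directly, so the nginx hop (and the health-probe workaround) goes away.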
Related
I have a domain at Cloudflare and some wildcards for subdomains,
both of which point to the load balancer of an nginx ingress on a Kubernetes cluster (GKE) in GCP. We have two pods with a service each (echo1 and echo2, which are essentially identical), and I apply this Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "echo1.eu3.example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: echo1
            port:
              number: 80
  - host: "echo2.example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: echo2
            port:
              number: 80
I can reach echo2 at echo2.example.com, but not echo1 at echo1.eu3.example.com. My question is how I can make the latter reachable as well.
I can suggest a quick check.
Set the Proxy status for "echo1.eu3.example.com" to DNS only, then check access. If that works, install certificates in Kubernetes via cert-manager. We have run into this issue a few times and resolved it by using domains only three labels deep, for instance "echo1-eu3.example.com"; it seems Cloudflare's certificates do not cover such deeply nested subdomains :) Of course, if someone writes up how to make deeper domains work with Cloudflare, that would be good practice for us :)
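If the DNS-only test works, a minimal cert-manager ClusterIssuer sketch for issuing those certificates might look like the following (assuming cert-manager is already installed; the issuer name and email are placeholders):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod                 # placeholder issuer name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com             # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx                   # solve ACME challenges via the nginx ingress
The Ingress then references the issuer via the cert-manager.io/cluster-issuer annotation together with a tls section.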
I have my service app running on port 9000. It is not a web/HTTP server; it is simply a service application running as a Windows service on that port, to which other apps (outside the container) connect.
So I have defined port 9000 in my Service definition and in my ConfigMap definition. We are using NGINX as a proxy for access from outside, and everything works.
Nginx Service:
- name: 9000-tcp
  nodePort: 30758
  port: 9000
  protocol: TCP
  targetPort: 9000
Config Map:
apiVersion: v1
kind: ConfigMap
data:
  "9000": default/frontarena-ads-aks-test:9000
Service definition:
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-aks-test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: frontarena-ads-aks-test
Ingress definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ads-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontarena-ads-aks-test
          servicePort: 9000
As mentioned, everything works. I know that TCP operates at layer 4 and HTTP at layer 7, the application layer.
I need to access my app from another app solely by its hostname and port, without any HTTP URL.
So does that basically mean I do NOT actually need my Ingress definition at all?
Do I not need to deploy it at all?
I would only need it if I needed HTTP access with some URL, for example hostname:port/pathA or hostname:port/pathB.
Is that correct? For a plain TCP connection we do not need our Ingress YAML definition at all? Thank you.
Yes, you don't need an Ingress at all in this case. According to the official Kubernetes documentation, an Ingress is:
An API object that manages external access to the services in a cluster, typically HTTP.
So, if you don't need any external access via HTTP, you can omit the Ingress.
Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
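To illustrate the point, here is a hedged sketch of exposing the TCP port with no Ingress object at all, via a LoadBalancer Service that selects the same pods (the Service name is a placeholder; the selector and ports mirror the question):
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-aks-test-lb   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: frontarena-ads-aks-test
  ports:
  - name: ads-tcp
    protocol: TCP
    port: 9000        # port exposed by the load balancer
    targetPort: 9000  # port the Windows service listens on in the pod
The existing NGINX route also keeps working without the Ingress resource, because TCP proxying in the nginx ingress controller is driven by the tcp-services ConfigMap and the controller Service's port, not by Ingress rules.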
I have set up an ingress for an application but want to whitelist my IP address, so I created this Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: ${MY_IP}/32
  name: ${INGRESS_NAME}
spec:
  rules:
  - host: ${DNS_NAME}
    http:
      paths:
      - backend:
          serviceName: ${SVC_NAME}
          servicePort: ${SVC_PORT}
  tls:
  - hosts:
    - ${DNS_NAME}
    secretName: tls-secret
But when I try to access it I get a 403 Forbidden, and in the nginx logs I see a client IP, but it is the IP of one of the cluster nodes and not my home IP.
I also created a configmap with this configuration:
data:
  use-forwarded-headers: "true"
In the nginx.conf in the container I can see that this has been passed on and configured correctly, but I still get a 403 Forbidden, and the client IP is still that of a cluster node.
I am running on an AKS cluster and the nginx ingress controller sits behind an Azure load balancer. The nginx ingress controller Service is exposed as type LoadBalancer, and the load balancer forwards to the NodePort opened by that Service.
Do I need to configure something else within Nginx?
If you've installed nginx-ingress with the Helm chart, you can simply configure your values.yaml file with controller.service.externalTrafficPolicy: Local, which I believe will apply to all of your Services. Otherwise, you can configure specific Services with service.spec.externalTrafficPolicy: Local to achieve the same effect on those specific Services.
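For example, a minimal values.yaml sketch, assuming the standard chart layout:
# values.yaml for the nginx ingress controller Helm chart
controller:
  service:
    externalTrafficPolicy: Local   # preserve the client source IP
Note that with Local, only nodes that actually run a controller pod pass the load balancer health checks, so traffic is no longer forwarded via other nodes.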
Here are some resources to further your understanding:
k8s docs - Preserving the client source IP
k8s docs - Using Source IP
It sounds like your Nginx Ingress Controller sits behind a NodePort (or LoadBalancer) Service, or rather behind kube-proxy. Generally, to get the controller to see the raw connecting IP, you will need to deploy it on the host network (hostNetwork) so it listens directly for incoming traffic.
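A hedged sketch of that alternative, as a fragment of the controller's Deployment or DaemonSet pod template (whether this is appropriate depends on your cluster; ports 80/443 must be free on the nodes):
spec:
  template:
    spec:
      hostNetwork: true                    # bind directly to the node's network interfaces
      dnsPolicy: ClusterFirstWithHostNet   # keep in-cluster DNS resolution working with hostNetwork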
I have a Kubernetes cluster with two services set up.
Service1 links to Deployment1 and Service2 links to Deployment2.
Deployment1 serves pods which can only be connected to using http.
Deployment2 serves pods which can only be connected to using https.
Using kubectl port-forward and exec'ing into pods, I know the services and deployments are responding as they should; connectivity between the services inside the cluster is working fine.
I have an nginx ingress set up to allow external connections to both services. The services should only be reached over https, and any incoming http connections need to be redirected to https. Here is the ingress setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
spec:
  tls:
  - secretName: tls-secret-one
    hosts:
    - service1.domain.com
    - service2.domain.com
  rules:
  - host: "service1.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 60001
  - host: "service2.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service2
          servicePort: 60002
Here is the problem. With this YAML I can connect to service1 (the http backend) with no issues, but connecting to service2 (the https backend) results in a 502 Bad Gateway.
If I add the annotation nginx.ingress.kubernetes.io/backend-protocol: "https", the connectivity switches: I can no longer connect to service1 (http backend) but can connect to service2 (https backend).
I can understand why the switch does this, but my question is:
Can you set the backend protocol per rule in an nginx ingress?
It's not possible to set the backend protocol per rule in a single Ingress. To achieve what you want, create two separate Ingresses, one for service1 and another for service2, and leave the Ingress for service1 on the default HTTP backend protocol while annotating the Ingress for service2 with HTTPS.
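A hedged sketch of that split, reusing the names and annotations from the question (the backend-protocol value is conventionally written in upper case; the Ingress names and TLS secret names are placeholders, and each Ingress gets its own secret so cert-manager manages them independently):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress-service1          # plain-HTTP backend
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
spec:
  tls:
  - secretName: tls-secret-service1
    hosts:
    - service1.domain.com
  rules:
  - host: "service1.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 60001
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress-service2          # HTTPS backend
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
spec:
  tls:
  - secretName: tls-secret-service2
    hosts:
    - service2.domain.com
  rules:
  - host: "service2.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service2
          servicePort: 60002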
I'm a bit new to Kubernetes and was going over "Ingress". After reading the k8s docs and googling, I summarised the following. Can somebody confirm/correct my understanding?
To understand Ingress, I divided it into two sections:
Cloud Infrastructure:
In this case, there is a built-in ingress controller which runs on the master node (but we can't see it when running kubectl get pods --all-namespaces). To configure it, first create your Deployment pods and expose them through Services (the Service type must be NodePort). Also make sure to create a default-backend service. Then create ingress rules as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  backend:
    serviceName: default-svc
    servicePort: 80
  rules:
  - host: api.foo.com
    http:
      paths:
      - path: /v1/
        backend:
          serviceName: api-svc-v1
          servicePort: 80
      - path: /v2/
        backend:
          serviceName: api-svc-v2
          servicePort: 80
Once you apply the ingress rules to the API server, the ingress controller listens to the API and updates /etc/nginx/nginx.conf. Also, after a few minutes, the nginx controller creates an external load balancer with an IP (let's say LB_IP).
Now to test: from your browser, enter http://api.foo.com/ (or http://LB_IP/), which will be routed to the default service, and http://api.foo.com/v1 (or http://LB_IP/v1), which will be routed to the service api-svc-v1.
Questions:
1. How can I see the /etc/nginx files, since the ingress controller pod is not visible?
2. While the ingress rules are applied and the external LB_IP is being created, are the DNS servers of all registrars updated with the DNS entry "api.foo.com"?
In-house kubernetes deployment using kubeadm:
In this case, there is no pre-installed ingress controller and you need to install one manually. To configure it, first create your Deployment pods and expose them through a Service (make sure the Service type is NodePort). Also make sure to create a default-backend service. Create the ingress controller using the YAML below:
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - "--default-backend-service=$(POD_NAMESPACE)/default-backend"
    image: "gcr.io/google_containers/nginx-ingress-controller:0.8.3"
    imagePullPolicy: Always
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      timeoutSeconds: 5
    name: nginx-ingress-controller
    readinessProbe:
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
We can see the ingress controller running on node3 using kubectl get pods, and by logging in to this pod we can see /etc/nginx/nginx.conf.
Now create the ingress rules as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
  - host: testabc.com
    http:
      paths:
      - backend:
          serviceName: appsvc1
          servicePort: 80
        path: /app1
      - backend:
          serviceName: appsvc2
          servicePort: 80
        path: /app2
Once you apply the ingress rules to the API server, the ingress controller listens to the API and updates /etc/nginx/nginx.conf. But note that no load balancer is created; instead, when you run kubectl get ingress you get Host=testabc.com and IP=127.0.0.1. To expose this ingress controller externally, I need to create a Service of type NodePort or type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 33200
    name: http
  selector:
    app: nginx-ingress-lb
After this, we will get an external IP (if type=LoadBalancer).
Now to test: from your browser, enter http://testabc.com/ (or http://<external-ip>/), which will be routed to the default service, and http://testabc.com/app1 (or http://<external-ip>/app1), which will be routed to the service appsvc1.
Question:
3. If the ingress controller pod is running on node3, how can it listen to the ingress API, which is running on node1?
Q.1 How can I see the /etc/nginx files, since the ingress controller pod is not visible?
Answer: Whenever you install an Nginx ingress via Helm, it creates an entire Deployment for that ingress. In this setup the Deployment resides in the kube-system namespace, and the pods belonging to that Deployment reside there as well. So, if you want to attach to a container of one of those pods, you need to look in that namespace and attach to it there; then you will be able to see the pods in that namespace.
(Screenshot: the kube-system namespace, with the Nginx ingress Deployment first in the list.)
Q.3 If the ingress controller pod is running on node3, how can it listen to the ingress API, which is running on node1?
Answer: All communication between pods and nodes takes place via Services in Kubernetes. A Service exposes the pods to every node, via a NodePort as well as internal and external endpoints. This Service is attached to the Deployment (the ingress deployment in this case) via labels and is known throughout the cluster for communication. I hope you know how to attach a Service to a Deployment. So even if the controller pod is running on node3, the Service knows this and forwards the incoming traffic to that pod.
(Screenshot: the Service's endpoints, exposed to the entire cluster.)
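For illustration, a hedged sketch of the label/selector pairing that answer describes, using the label from the Service above (the Deployment name is a placeholder; the image comes from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress-lb          # must match the Service's selector
  template:
    metadata:
      labels:
        app: nginx-ingress-lb        # the nginx-ingress Service selects pods by this label
    spec:
      containers:
      - name: nginx-ingress-controller
        image: "gcr.io/google_containers/nginx-ingress-controller:0.8.3"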