kubernetes ingress controller clarification [closed] - nginx

I'm a bit new to Kubernetes and was going over "Ingress". After reading the k8s docs and googling, I summarised the following. Can somebody confirm/correct my understanding?
To understand Ingress, I divided it into 2 sections :
Cloud Infrastructure:
In this setup there is a built-in ingress controller which runs on the master node (but we can't see it when running kubectl get pods --all-namespaces). To configure it, first create your Deployment pods and expose them through Services (the Service type must be NodePort). Also, make sure to create a default backend service. Then create the ingress rules as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  backend:
    serviceName: default-svc
    servicePort: 80
  rules:
  - host: api.foo.com
    http:
      paths:
      - path: /v1/
        backend:
          serviceName: api-svc-v1
          servicePort: 80
      - path: /v2/
        backend:
          serviceName: api-svc-v2
          servicePort: 80
Once you apply the ingress rules to the API server, the ingress controller watches the API and updates /etc/nginx/nginx.conf. Also, after a few minutes, the nginx controller creates an external load balancer with an IP (let's say LB_IP).
Now to test: from your browser, enter http://api.foo.com/ (or http://LB_IP/), which will be routed to the default service, and http://api.foo.com/v1 (or http://LB_IP/v1), which will be routed to the service api-svc-v1.
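If DNS for api.foo.com is not set up yet, you can also simulate the host-based routing from the command line; a quick check, assuming LB_IP stands for the load balancer IP from above:

# Fake the Host header so the controller matches the api.foo.com rule
curl -H "Host: api.foo.com" http://LB_IP/v1/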
Question:
1. How can I see the /etc/nginx files, since the ingress controller pod is not visible?
2. While the ingress rules are applied and the external LB_IP is being created, are the DNS servers of all registrars updated with a DNS entry for "api.foo.com"?
In-house kubernetes deployment using kubeadm:
In this setup there is no built-in ingress controller and you need to install one manually. To configure, first create your Deployment pods and expose them through a Service (make sure the Service type is NodePort). Also, make sure to create a default backend service. Create the ingress controller using the below yaml file:
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - "--default-backend-service=$(POD_NAMESPACE)/default-backend"
    image: "gcr.io/google_containers/nginx-ingress-controller:0.8.3"
    imagePullPolicy: Always
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      timeoutSeconds: 5
    name: nginx-ingress-controller
    readinessProbe:
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
We can see the ingress controller running on node3 using "kubectl get pods", and by logging in to this pod we can see /etc/nginx/nginx.conf, for example:
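The pod name below is a placeholder; -o wide also shows the node each pod landed on:

# Show which node each pod is scheduled on
kubectl get pods -o wide

# Print the generated nginx configuration from the controller pod
kubectl exec -it <nginx-ingress-controller-pod> -- cat /etc/nginx/nginx.conf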
Now create the ingress rules as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
  - host: testabc.com
    http:
      paths:
      - backend:
          serviceName: appsvc1
          servicePort: 80
        path: /app1
      - backend:
          serviceName: appsvc2
          servicePort: 80
        path: /app2
Once you apply the ingress rules to the API server, the ingress controller watches the API and updates /etc/nginx/nginx.conf. But note that no load balancer is created. Instead, when you run "kubectl get ingress", you get Host=testabc.com and IP=127.0.0.1. To expose this ingress controller outside the cluster, I need to create a Service with type=NodePort or type=LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 33200
    name: http
  selector:
    app: nginx-ingress-lb
After this, we will get an external IP (if type=LoadBalancer).
Now to test: from your browser, enter http://testabc.com/ (or http://<node-ip>:33200/), which will be routed to the default service, and http://testabc.com/app1 (or http://<node-ip>:33200/app1), which will be routed to the service appsvc1.
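Without DNS in place, the same can be checked with curl against any node's IP on the NodePort (the node IP is a placeholder):

# 33200 is the nodePort from the Service definition above
curl -H "Host: testabc.com" http://<node-ip>:33200/app1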
Question:
3. If the ingress-controller pod is running on node3, how can it listen to the ingress API, which is running on node1?

Q.1 How can I see the /etc/nginx files, since the ingress controller pod is not visible?
Answer: Whenever you install an Nginx ingress via Helm, it creates an entire Deployment for that ingress. This Deployment resides in the kube-system namespace, and so do all the pods bound to it. So if you want to attach to a container of one of these pods, you need to address that namespace explicitly. Then you will be able to see the pods in that namespace.
[Screenshot: deployments in the kube-system namespace; the first deployment in the list is the Nginx ingress.]
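For example (kube-system is the typical namespace for a Helm-installed controller and may differ in your cluster):

# List the controller's Deployment and pods in the kube-system namespace
kubectl get deployments -n kube-system
kubectl get pods -n kube-system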
Q.3 If the ingress-controller pod is running on node3, how can it listen to the ingress API, which is running on node1?
Answer: All communication between pods and nodes takes place using Services in Kubernetes. A Service exposes the pod to every node via a NodePort, as well as via internal and external endpoints. The Service is then attached to the Deployment (the ingress deployment in this case) via labels and is known throughout the cluster for communication. I hope you know how to attach a Service to a Deployment. So even if the controller pod is running on node3, the Service knows this and transfers the incoming traffic to the pod.
[Screenshot: the Service's endpoints, exposed to the entire cluster, shown right above the cursor.]
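As a rough sketch of that label wiring (the Deployment below is hypothetical; its pod label matches the selector of the nginx-ingress Service from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress-lb
  template:
    metadata:
      labels:
        app: nginx-ingress-lb   # matched by the Service's selector, wherever the pod lands
    spec:
      containers:
      - name: nginx-ingress-controller
        image: "gcr.io/google_containers/nginx-ingress-controller:0.8.3"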

Related

Nginx ingress controller - Return 200 on root

We are using the Nginx ingress controller in Azure Kubernetes Service to direct traffic to a number of .NET Apis that we run there.
All calls to this are routed via the Azure Application Gateway for WAF and DNS reasons.
Application gateway has "health probes" that hit your backend pools (which point to the external IP of our nginx ingress controller service) performing a GET at the root.
Previously we had services for each site, setup as LoadBalancer, which gave each site their own external IP address, and we pointed the backend pool to that and it worked fine.
But now we are trying to do things more securely and route all calls via the Ingress Controller... but now we have one backend pool with the ingress controller's IP address, and as there's nothing there the health probe comes back unhealthy, and the site doesn't work.
I have set up the Ingress for the site so that a request hitting the backend pool with the domain (below) will work, but the health probe doesn't do that, as it is just doing a GET on the IP address of the controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: "api.mydomain.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-api-service
            port:
              number: 443
I installed the controller using the Helm chart, and I just want to be able to set it so that a GET request to that controller will just return 200, and any other request will be directed appropriately. I had tried the below for our ingress, to route a call to the root to the API (which has a 200 response at its root), but I don't think that was the right place for it, and it didn't work. It might have to be part of the Helm command that sets up the ingress controller itself.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-service
            port:
              number: 443
  - host: "api.mydomain.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-api-service
            port:
              number: 443
The nginx ingress controller exposes a default backend /healthz endpoint which returns 200 OK. You can point your App Gateway health probe at this endpoint.
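A quick way to verify that before repointing the probe (the controller IP is a placeholder):

# The controller's default backend answers on /healthz with 200 OK
curl -i http://<ingress-controller-external-ip>/healthz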
Also, instead of using App gateway + NGINX ingress controller which require 2 hops before reaching your service, consider using Application Gateway ingress controller (AGIC).

Kubernetes Nginx Ingress partial ssl termination

I'd like to split incoming traffic in Kubernetes Nginx in the following way:
Client --> Nginx --> {Service A, Service B}
The problem I am facing is Service A is an internal service and does not support HTTPS therefore SSL should be terminated for Service A. On the other hand, Service B is an external service (hosted on example.com) and only works over HTTPS.
I cannot manage to get this working easily with Kubernetes Nginx. Here is what I have come up with:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-proxy
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/upstream-vhost: example.com
spec:
  tls:
  - hosts:
    - proxy.com
    secretName: secret
  rules:
  - host: proxy.com
    http:
      paths:
      - path: /api/v1/endpoint
        backend:
          serviceName: service-a
          servicePort: 8080
      - path: /
        backend:
          serviceName: service-b
          servicePort: 443
kind: Service
apiVersion: v1
metadata:
  name: service-b
  namespace: default
spec:
  type: ExternalName
  externalName: service-b.external
  ports:
  - port: 443
I have got a route for service-b.external:443 to point to example.com.
This solution only works if service-b is over HTTPS, but in my case, I cannot change to HTTPS for this service because of some other internal dependencies.
My problem is that the backend-protocol annotation applies to the whole Ingress resource, and I cannot define it per path.
P.S: I am using AWS provider
Following the suggested solution and the question from the comments:
Yes, as mentioned below, it is possible to have two Ingress items. In your case only one of them should have backend-protocol in it, along the lines of the sketch below.
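A sketch of that split, reusing the resources from the question; only the Ingress in front of service-b carries the backend-protocol and upstream-vhost annotations (the -http/-https name suffixes are made up):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-proxy-http
spec:
  tls:
  - hosts:
    - proxy.com
    secretName: secret
  rules:
  - host: proxy.com
    http:
      paths:
      - path: /api/v1/endpoint
        backend:
          serviceName: service-a   # plain HTTP; TLS terminated at the ingress
          servicePort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-proxy-https
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/upstream-vhost: example.com
spec:
  rules:
  - host: proxy.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-b   # external HTTPS-only service
          servicePort: 443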
According to the nginx ingress documentation:
Basic usage - host based routing
ingress-nginx can be used for many use cases, inside various cloud provider and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name.
First of all follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA, myServiceB. Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org. One possible solution is to create two ingress resources:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-myservicea
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myservicea.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservicea
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-myserviceb
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myserviceb.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myserviceb
          servicePort: 80
When you apply this yaml, 2 ingress resources will be created, managed by the ingress-nginx instance. Nginx is configured to automatically discover all ingresses with the kubernetes.io/ingress.class: "nginx" annotation. Please note that the ingress resource should be placed in the same namespace as the backend resource.
On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that point myServiceA.foo.org and myServiceB.foo.org to the nginx external IP. Get the external IP by running:
kubectl get services -n ingress-nginx
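The output will look roughly like this (the values below are illustrative, not from a real cluster):

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.0.171.239   203.0.113.10   80:32144/TCP,443:30874/TCP   5m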
It is also possible to have separate nginx classes as mentioned here.

Multiple docker apps running nginx at multiple different subpaths

I'm attempting to run several Docker apps in a GKE instance, with a load balancer setup exposing them. Each app comprises a simple node.js app with nginx to serve the site; a simple nginx config exposes the apps with a location block responding to /. This works well locally when developing since I can run each pod on a separate port, and access them simply at 127.0.0.1:8080 or similar.
The problem I'm encountering is that when using the GCP load balancer, whilst I can easily route traffic to the Kubernetes services such that https://example.com/ maps to my foo service/pod and https://example.com/bar goes to my bar service, the bar pod responds with a 404, since the path /bar doesn't match the path specified in the location block.
The number of these pods will scale a lot so I do not wish to manually know ahead of time what path each pod will be under, nor do I wish to embody this in my git repo.
Is there a way I can dynamically define the path the location block matches, for example via an environment variable, such that I could define it as part of the Helm charts I use to deploy these services? Alternatively is it possible to match all paths? Is that a viable solution, or just asking for problems?
Thanks for your help.
Simply use ingress. It will allow you to map different paths to different backend Services. It is very well explained both in the GCP docs and in the official Kubernetes documentation.
A typical ingress object definition may look as follows:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: my-products
    servicePort: 60001
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-products
          servicePort: 60000
      - path: /discounted
        backend:
          serviceName: my-discounted-products
          servicePort: 80
      - path: /special
        backend:
          serviceName: special-offers
          servicePort: 80
      - path: /news
        backend:
          serviceName: news
          servicePort: 80
When you apply your ingress definition on GKE, a load balancer is created automatically. Note that all Services may use the same, standard HTTP port and you don't have to use any custom ports.
You may want to specify a default backend, present in the above example (the backend section right under spec), but it's optional. It will ensure that:
Any requests that don't match the paths in the rules field are sent to
the Service and port specified in the backend field. For example, in
the following Ingress, any requests that don't match / or /discounted
are sent to a Service named my-products on port 60001.
The only problem you may encounter when using the default ingress controller available on GKE is that for the time being it doesn't support rewrites.
If your nginx pods expose app content only on the "/" path, the lack of rewrite support shouldn't be a limitation at all, and as far as I understand this applies in your case:
Each app comprises a simple node.js app with nginx to serve the site;
a simple nginx config exposes the apps with a location block
responding to /
However, if you decide at some point that you need the mentioned rewrites, because e.g. one of your apps isn't exposed under / but rather under /bar within the Pod, you may decide to deploy the nginx ingress controller, which can also be done pretty easily on GKE.
So you will only need it in the following scenario: a user accesses the ingress IP followed by /foo -> the request is not only directed to the specific backend Service that exposes your nginx Pod, but the original path (/foo) also needs to be rewritten to the new path (/bar) under which the application is exposed within the Pod.
UPDATE:
Thank you for your reply. The above ingress configuration is very similar to what I've already configured, forwarding /foo and /bar to different pods. The issue is that the path gets forwarded, and (after doing some more research on the issue) I believe I need to rewrite the URL that's sent to the pod, since the location / { ... } block in my nginx config won't match against the received path of /foo or /bar. – aodj Aug 14 at 9:17
Well, you're right. The original access path, e.g. /foo, indeed gets forwarded to the target Pod. So choosing the /foo path, apart from leading you to the respective backend defined in the ingress resource, implies that the target nginx server running in the Pod must serve its content also under the /foo path.
I verified it with the GKE ingress and can confirm, by checking Pod logs, that an http request sent to the nginx Pod through the /foo path indeed comes to the Pod as a request for /usr/share/nginx/html/foo, while it serves its content under /, not /foo, from /usr/share/nginx/html. So requesting something that doesn't exist on the target server inevitably leads to a 404 error.
As I mentioned before, the default ingress controller available on GKE doesn't support rewrites, so if you want to use it for some reason, reconfiguring your target nginx servers seems the only solution to make it work, e.g. along the lines of the sketch below.
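A minimal sketch of such a reconfiguration, assuming the content lives in the default /usr/share/nginx/html docroot (all paths here are assumptions):

# Hypothetical nginx server block: also serve the same content under /foo
# by aliasing the prefixed location back to the real docroot.
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    location /foo/ {
        alias /usr/share/nginx/html/;   # /foo/x is served from the docroot's x
        index index.html;
    }
}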
Fortunately, we have another option: the nginx ingress controller. It supports rewrites, so it can easily solve our problem. We can deploy it on our GKE cluster by running the two following commands:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
Yes, it's really that simple! You can take a closer look at the installation process in official docs.
Then we can apply the following ingress resource definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: nginx-deployment-1
          servicePort: 80
        path: /foo(/|$)(.*)
      - backend:
          serviceName: nginx-deployment-2
          servicePort: 80
        path: /bar(/|$)(.*)
Note that we used kubernetes.io/ingress.class: "nginx" annotation to select our newly deployed nginx-ingress controller to handle this ingress resource rather than the default GKE-ingress controller.
Rewrites that were used will make sure that the original access path gets rewritten before reaching the target nginx Pod. So it's perfectly fine that both sets of Pods exposed by nginx-deployment-1 and nginx-deployment-2 Services serve their contents under "/".
If you want to quickly check how it works on your own, you can use the following Deployments:
nginx-deployment-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo DEPLOYMENT-1 > /usr/share/nginx/html/index.html"]
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
nginx-deployment-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  labels:
    app: nginx-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo DEPLOYMENT-2 > /usr/share/nginx/html/index.html"]
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
And expose them via Services by running:
kubectl expose deployment nginx-deployment-1 --type NodePort --target-port 80 --port 80
kubectl expose deployment nginx-deployment-2 --type NodePort --target-port 80 --port 80
You may even omit --type NodePort, as the nginx ingress controller also accepts ClusterIP Services.
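Once the controller's load balancer has an external IP, the rewrites can be verified (the IP is a placeholder):

# /foo/... is rewritten to /... before reaching nginx-deployment-1's pods
curl http://<ingress-nginx-external-ip>/foo/   # prints: DEPLOYMENT-1
curl http://<ingress-nginx-external-ip>/bar/   # prints: DEPLOYMENT-2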

Can you set backend-protocol per rule in k8s nginx ingress?

I have a kubernetes cluster setup with two services set up.
Service1 links to Deployment1 and Service2 links to Deployment2.
Deployment1 serves pods which can only be connected to using http.
Deployment2 serves pods which can only be connected to using https.
Using kubectl port-forward and exec'ing into pods, I know the services and deployments are responding as they should; internal connectivity between the services is working fine.
I have an nginx ingress setup to allow external connections to both services. The services should only be connected to using https and any incoming connections that are http need to be redirected to https. Here is the ingress setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
spec:
  tls:
  - secretName: tls-secret-one
    hosts:
    - service1.domain.com
    - service2.domain.com
  rules:
  - host: "service1.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 60001
  - host: "service2.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service2
          servicePort: 60002
Here is the problem. With this yaml I can connect to service1 (http backend) with no issues but connecting to service2 (https backend) results in a 502 Bad Gateway.
If I add the annotation nginx.ingress.kubernetes.io/backend-protocol: "https", the connectivity switches: I can no longer connect to service1 (http backend) but can connect to service2 (https backend).
I can understand why the switch does this, but my question is:
Can you set the backend-protocol per rule in an nginx-ingress ?
It's not possible to set the backend protocol per rule in a single Ingress. To achieve what you want, you can create two different Ingresses, one for service1 and another one for service2, and annotate the Ingress for service1 with http and the Ingress for service2 with https, roughly as sketched below.
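A sketch of that split, reusing the names from the question (the original master-ingress is divided in two; http is the default backend protocol, so the first annotation could also be omitted):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress-service1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
spec:
  tls:
  - secretName: tls-secret-one
    hosts:
    - service1.domain.com
  rules:
  - host: "service1.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 60001
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master-ingress-service2
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
spec:
  tls:
  - secretName: tls-secret-one
    hosts:
    - service2.domain.com
  rules:
  - host: "service2.domain.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: service2
          servicePort: 60002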

How do I host multiple services using subdirectories with nginx-ingress?

Problem
I would like to host multiple services on a single domain name under different paths. The problem is that I'm unable to get request path rewriting working using nginx-ingress.
What I've tried
I've installed nginx-ingress using these instructions:
helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true
CHART APP VERSION
nginx-ingress-0.3.7 1.5.7
The example works great with hostname based backends:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
However, I can't get path rewriting to work. This version redirects requests to the hello-kubernetes-first service, but doesn't do the path rewrite, so I get a 404 error from that service because it's looking for a /foo directory within that service (which doesn't exist).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo
I've also tried this example for path rewriting:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo(/|$)(.*)
But the requests aren't even directed to the hello-kubernetes-first service.
It appears that my rewrite configuration isn't making it to the /etc/nginx/nginx.conf file. When I run the following, I get no results:
kubectl exec nginx-ingress-nginx-ingress-XXXXXXXXX-XXXXX cat /etc/nginx/nginx.conf | grep rewrite
How do I get the path rewriting to work?
Additional information:
kubectl / kubernetes version: v1.14.8
Hosting on Azure Kubernetes Service (AKS)
This is not likely to be an issue with AKS, as the components you use work on top of the Kubernetes layer. However, if you want to be sure, you can deploy this on top of minikube locally and see if the problem persists.
There are also a few other things to consider:
There is a detailed guide about creating an ingress controller on AKS. The guide is up to date and confirmed to be working fine.
This article shows you how to deploy the NGINX ingress controller in
an Azure Kubernetes Service (AKS) cluster. The cert-manager project is
used to automatically generate and configure Let's Encrypt
certificates. Finally, two applications are run in the AKS cluster,
each of which is accessible over a single IP address.
You may also want to use an alternative like Traefik:
Traefik is a modern HTTP reverse proxy and load balancer made to
deploy microservices with ease.
Remember that:
Operators will typically wish to install this component into the
kube-system namespace where that namespace's default service account
will ensure adequate privileges to watch Ingress resources
cluster-wide.
Please let me know if that helped.
