Multiple pods for nginx ingress controller - nginx

We are using the licensed version of the nginx ingress controller for our application.
Our nginx ingress controller is deployed in EKS as a single pod and is serving requests as we need, but we now want to remove that single point of failure by running 2 pods of the same nginx ingress controller (replicas: 2). So my question is: if we run 2 replicas of the nginx ingress controller, do we have to buy 2 licences, or is 1 licence per environment enough?
Setting the replica count to 2 works as expected, but I need to know how the licence plays a role when 2 pods are created for the same nginx ingress controller.
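For context, scaling the controller is just a replica change on its Deployment; the licensing question is separate from the Kubernetes side. A minimal sketch, assuming the controller runs as a Deployment named nginx-ingress in the nginx-ingress namespace (names and image tag here are illustrative, not taken from the post):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress            # assumed Deployment name
  namespace: nginx-ingress       # assumed namespace
spec:
  replicas: 2                    # two controller pods for failover
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress:3.4.0   # illustrative image tag
        # controller args, RBAC, and probes omitted for brevity
        ports:
        - containerPort: 80
        - containerPort: 443

The same effect can be achieved by editing spec.replicas on the existing Deployment rather than redeploying.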

Related

Linkerd traffic split with Nginx Ingress Controller

I have deployed a Linkerd service mesh, and my Kubernetes cluster is configured with the Nginx ingress controller as a DaemonSet; all the ingresses are working fine, and so is Linkerd. Recently I added traffic split functionality to run my blue/green setup, and I can reach these services through separate ingress resources. I have created an apex-web service as described here. If I reach this service from inside the cluster it works perfectly. I have created another ingress resource, but I'm not able to test the blue/green functionality from outside of my cluster. I'd like to mention that I have meshed (injected the Linkerd proxy into) all my Nginx pods, but Nginx returns a "503 Service Temporarily Unavailable" message.
I went through the documentation and created the ingress following this; I can confirm that the annotations below were added to the ingress resources.
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
    grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
but still no luck from outside the cluster.
I'm testing with the provided emojivoto app, and the traffic split and apex-web services are all in this training repository.
I'm not quite sure what went wrong and how to fix access from outside the cluster. I'd really appreciate it if anyone could assist me in fixing this Linkerd blue/green issue.
I raised this question in the Linkerd Slack channel and got it fixed with wonderful support from the community. It seems Nginx doesn't like a service that doesn't have any endpoints. My configuration was correct; I was asked to change the service referenced in the traffic split to a service with endpoints, and that fixed the issue.
In a nutshell, my traffic split was configured with the web-svc and web-svc-2 services. I changed the traffic split's spec.service to the same web-svc and it worked.
Here is the traffic split configuration after the update.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m
Kudos to the Linkerd team, who helped me fix this issue. It is working like a charm.
tl;dr: The nginx ingress requires a Service resource to have an Endpoints resource in order to be considered a valid destination for traffic. The architecture in the repo creates three Service resources, one of which acts as an apex and has no Endpoints resource because it has no selectors, so the nginx ingress won't send traffic to it, and the leaf services will not get traffic as a result.
The example in the repo follows the SMI Spec by defining a single apex service and two leaf services. The web-apex service does not have any endpoints, so nginx will not send traffic to it.
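For illustration, an apex Service without a selector never gets an Endpoints object, which is exactly what the nginx ingress objects to. A minimal sketch, with field values assumed rather than copied from the repo:

apiVersion: v1
kind: Service
metadata:
  name: web-apex             # apex service from the example
  namespace: emojivoto
spec:
  # No selector here, so Kubernetes creates no Endpoints for this Service
  # and the nginx ingress sees no valid backend behind it.
  ports:
  - port: 80
    targetPort: 8080         # assumed container port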
According to the SMI Spec, services can be self-referential, which means that a service can be both an apex and a leaf service. So, to use the nginx ingress with this example, you can modify the TrafficSplit definition to change the spec.service value from web-apex to web-svc:
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m

What is the role of an external Load Balancer if we are using nginx ingress controller?

I have deployed my application in a cluster of 3 nodes. To make this application externally accessible, I followed this documentation and integrated the nginx ingress controller.
Now when I check my Google Load Balancer console, I can see a new load balancer created and everything works fine. But the strange thing is that two of my nodes are unhealthy and only one node is accepting connections. Then I found this discussion and understood that only the node running the nginx ingress controller pod will be considered healthy by the load balancer.
Now I find it hard to understand this data flow and the use of the external load balancer here. We use an external load balancer to balance load across multiple machines, but with this configuration the external load balancer will always forward traffic to the node with the nginx ingress controller pod. If that is correct, what is the role of the external load balancer here?
You can have more than one replica of the nginx ingress controller pod, deployed across more than one Kubernetes node for high availability, to reduce the possibility of downtime in case one Kubernetes node becomes unavailable. The LoadBalancer will send each request to one of those nginx ingress controller pods, and from the nginx ingress controller pod it will be forwarded to one of the backend pods. The role of the external load balancer is to expose the nginx ingress controller pods outside the cluster: NodePort is not recommended for production use and ClusterIP cannot be used to expose pods outside the cluster, hence LoadBalancer is the viable option.
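As a rough sketch of that last point, the controller pods are typically exposed by a Service of type LoadBalancer; the cloud load balancer then only has to reach this Service, and the controller pods route on to the application Services. The name, namespace, and labels below are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress            # assumed Service name
  namespace: ingress-nginx       # assumed namespace
spec:
  type: LoadBalancer             # provisions the external (cloud) load balancer
  selector:
    app: nginx-ingress           # must match the controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443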

Expose multiple backends with multiple IPs with Kubernetes Ingress resources

I exposed a service with a static IP and an Ingress through an nginx controller, as in one of the examples in the kubernetes/ingress repository. I have a second LoadBalancer service, not managed by any Ingress resource, that is no longer properly exposed after adding the new resources for the first service (I do not understand why this is the case).
I tried to add a second Ingress and LoadBalancer service to assign the second static IP, but I can't get it to work.
How would I go about exposing the second service, preferably with an Ingress? Do I need to add a second Ingress resource, or do I have to reconfigure the one I already have?
Using a Service with type: LoadBalancer and using an Ingress are usually mutually exclusive ways to expose your application.
When you create a Service with type: LoadBalancer, Kubernetes creates a LoadBalancer in your cloud account that has an IP, opens the ports on that LoadBalancer that match your Service, and then directs all traffic sent to that IP to the one Service. So if you have 2 Service objects, each with type: LoadBalancer for 2 different Deployments, then you have 2 IPs as well (one for each Service).
The Ingress model is based on directing traffic through a single Ingress Controller which is running something like nginx. As the Ingress resources are added, the Ingress Controller reconfigures nginx to include the new Ingress details. In this case, there will be a Service for the Ingress Controller (e.g. nginx) that is type: LoadBalancer, but all of the services that the Ingress resources point to should be type: ClusterIP. Traffic for all the Ingress objects will flow through the same public IP of the LoadBalancer for the Ingress Controller Service to the Ingress Controller (e.g. nginx) Pods. The configuration details from the Ingress object (e.g. virtual host or port or route) will then determine which Service will get the traffic.
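To make that concrete, here is a minimal sketch of one Ingress with two host-based rules behind a single controller; the hostnames and Service names are assumptions, and it is written against the current networking.k8s.io/v1 API:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-backends            # assumed name
spec:
  ingressClassName: nginx
  rules:
  - host: one.example.com       # assumed hostname for the first backend
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-one       # ClusterIP Service (assumed name)
            port:
              number: 80
  - host: two.example.com       # assumed hostname for the second backend
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-two       # ClusterIP Service (assumed name)
            port:
              number: 80

Both hosts resolve to the single public IP of the controller's LoadBalancer Service; nginx then picks the backend from the Host header.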

How can I route traffic through a custom proxy to my kubernetes container?

So we've got a big site with 1 nginx config that handles everything! This includes SSL.
At the moment, the config is set up to route all traffic for subdomain.ourdomain.com to our exposed kubernetes service.
When I visit subdomain.ourdomain.com, it returns a 502 Bad Gateway. I've triple-checked that the service inside my kubernetes pod is running properly. I'm pretty certain there is something wrong with the kubernetes config I'm using.
So what I've done:
Created kubernetes service
Exposed it using type LoadBalancer
Added the correct routing to our nginx config for our subdomain
This is what the kubectl get services returns:
users <cluster_ip> <external_ip> 80/TCP 12m
This is what kubectl get endpoints returns:
kubernetes <kub_ip>:443 48d
redis-master 10.0.1.5:6379 48d
redis-slave 10.0.0.5:6379 48d
users 10.0.2.7:80 3m
How can I route all traffic through our nginx configuration to our kubernetes service?
We tried routing all traffic to our kubernetes container cluster IP, but this didn't work.
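For reference, the users Service implied by the output above would look roughly like this; only the name and port come from the question, while the selector and target port are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: users
spec:
  type: LoadBalancer        # gives the external IP shown by kubectl get services
  selector:
    app: users              # assumed pod label
  ports:
  - port: 80
    targetPort: 80          # assumed container port

The external nginx should proxy to the Service's external IP (or a DNS name pointing at it) on port 80; the cluster IP is only routable from inside the cluster, which is why routing to it from the external proxy did not work.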

How can I deploy an ingress controller for my Kubernetes cluster

So I built my Kubernetes cluster on AWS using KOPS
I then deployed SocketCluster on my K8s cluster using Baasil, which deploys 7 YAML files.
My problem is that the scc-ingress isn't getting any IP or endpoint, as I have not deployed any ingress controller.
According to the ingress controller docs, I am recommended to deploy an nginx ingress controller.
I need easy, explained steps to deploy the nginx ingress controller for my specific cluster.
To view the current status of my cluster in a nice GUI, see the screenshots below (Deployments, Ingress, Pods, Replica Sets, Services).
The answer is here https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.4.0.yaml
But obviously the scc-ingress file needed to be changed to have a host such as foo.bar.com.
I also needed to generate a self-signed SSL certificate using OpenSSL, as per this link: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls
Finally, I had to add a CNAME record in Route53 pointing foo.bar.com to the DNS name of the ELB that was created.
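A minimal sketch of what the modified scc-ingress might look like, assuming the self-signed certificate is stored in a Secret named scc-tls and the backend Service is named scc (both names are illustrative), written against the current networking.k8s.io/v1 API:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: scc-ingress
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: scc-tls       # Secret holding the self-signed cert and key (assumed name)
  rules:
  - host: foo.bar.com         # the Route53 CNAME points this host at the ELB
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: scc         # assumed backend Service name
            port:
              number: 80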
