I already have NGINX handling reverse proxying and load balancing for my bare-metal servers and VMs, and I wonder whether I can use the same instance to expose services from my Kubernetes cluster in load-balancer mode. If so, could I use it for both L4 and L7?
You can't use it as a Service of type LoadBalancer, because there's no cloud-provider API to manage an external NGINX instance. There are a couple of things you can do instead:
Create a Kubernetes Service exposed on a NodePort. Your architecture will look like this:
External NGINX -> Kubernetes NodePort Service -> Pods
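As a sketch, a NodePort Service for this setup might look like the following (the name, selector, and ports are placeholders, not from the original question):

```yaml
# Hypothetical NodePort Service; adjust name, selector, and ports to your app.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 8080  # container port on the pods
    nodePort: 30080   # port opened on every node (30000-32767 by default)
```

You would then point an upstream block in the external NGINX at `<node-ip>:30080` for each node in the cluster.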
Create a Kubernetes Ingress managed by an ingress controller. The most popular one happens to be NGINX. Your architecture will then look something like this:
External NGINX -> Kubernetes Service (has to be NodePort) -> Ingress (NGINX) -> Backend Service -> Pods
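For the second option, an Ingress resource routing by hostname might look like this sketch (hostname, service name, and the `networking.k8s.io/v1` API version are assumptions; older clusters used `extensions/v1beta1`):

```yaml
# Hypothetical Ingress handled by the in-cluster NGINX ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app   # backend Service (ClusterIP)
            port:
              number: 80
```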
Related
I have deployed nginx on GKE and it created an internal TCP network load balancer, which is a layer-4 LB. The application works as expected.
However, if I want to use GKE's layer-7 HTTP(S) load balancer, is there a way to do that through nginx? I know there are some annotations for AWS, but I'm not sure about GKE.
We tried creating an HTTP(S) load balancer using GKE Ingress. It was created, but there are some issues with it and we are unable to use the application. So can we use the nginx controller to create an L7 load balancer?
I have some services running in Kubernetes. I need an NGINX in front of them, to redirect traffic according to the URLs, handle SSL encryption and load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and expose NGINX with a load-balancing Service? Do I understand correctly that the gcloud LB would pass the configured ports (e.g. 80 and 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is already fronted by a gcloud TCP load balancer, and if one of your nginx pods goes down for any reason, the gcloud load balancer will stop forwarding traffic to it. Since you already have a gcloud load balancer, you will have to use a NodePort Service type and point your gcloud load balancer at all the nodes of your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the services internally in your K8s cluster. I recommend you set up an nginx ingress controller, which will manage the nginx.conf for you through Ingress resources, and which you can also expose as a LoadBalancer Service type.
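As a rough sketch, exposing the in-cluster nginx Deployment on fixed NodePorts that the gcloud load balancer can target might look like this (all names and port numbers are assumptions):

```yaml
# Hypothetical NodePort Service fronting the nginx pods; the gcloud LB
# would be pointed at these nodePorts on every node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
```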
My problem is simple. I have an AKS deployment with a LoadBalancer service that needs to use HTTPS with a certificate.
How do I do this?
Everything I'm seeing online involves Ingress and nginx-ingress in particular.
But my deployment is not a website, it's a Dropwizard service with a REST API on one port and an admin service on another port. I don't want to map the ports to a path on port 80, I want to keep the ports as is. Why is HTTPS tied to ingress?
I just want HTTPS with a certificate and nothing more changed, is there a simple solution to this?
A sidecar container running nginx with the correct certificates (possibly loaded from a Secret or a ConfigMap) will do the job without Ingress. This seems to be a good example, using the nginx-ssl-proxy container.
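A sketch of that sidecar pattern might look like the following; the image names, ports, and the `app-tls` Secret are placeholders, and the nginx server block (proxying 8443 to localhost:8080) is assumed to live in the `nginx-ssl-conf` ConfigMap:

```yaml
# Hypothetical TLS-terminating nginx sidecar next to the Dropwizard app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dropwizard-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dropwizard-app
  template:
    metadata:
      labels:
        app: dropwizard-app
    spec:
      containers:
      - name: app
        image: your-registry/dropwizard-app:latest  # assumption: API on 8080, admin on 8081
        ports:
        - containerPort: 8080
        - containerPort: 8081
      - name: nginx-ssl-proxy
        image: nginx:stable
        ports:
        - containerPort: 8443
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/certs
          readOnly: true
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: tls-certs
        secret:
          secretName: app-tls   # e.g. kubectl create secret tls app-tls --cert=... --key=...
      - name: nginx-conf
        configMap:
          name: nginx-ssl-conf  # holds the server block proxying 8443 -> localhost:8080
```

The Service of type LoadBalancer would then expose port 8443 (and a second TLS port for the admin service, if needed) instead of the plain-text ports.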
Yes, that's right: as of this writing, an Ingress works only on port 80 or port 443. It could potentially be extended to use any port, since nginx, Traefik, haproxy, etc. can all listen on different ports.
So you are down to either a LoadBalancer or a NodePort type of Service. Type LoadBalancer will not work directly with TLS, since the Azure load balancers are layer 4. So you will have to use Application Gateway, and for security reasons it's preferable to use it with an internal load balancer.
Since you are using Azure, you can run something like this (assuming that your K8s cluster is configured correctly to use the Azure cloud provider, via either the --cloud-provider option or the cloud-controller-manager):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: your-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: <your-port>
  selector:
    app: your-app
EOF
and that will create an Azure load balancer on the port you like for your service. Behind the scenes, the load balancer will point to a port on the nodes, and within the nodes there will be firewall rules that route to your container. Then you can configure Application Gateway. Here's a good article describing it, but using port 80; you will have to change it to use port 443 and configure the TLS certs. Application Gateway also supports end-to-end TLS, in case you want to terminate TLS directly on your app too.
The other option is NodePort, and you can run something like this:
$ kubectl expose deployment <deployment-name> --type=NodePort
Kubernetes will then pick a random port on all your nodes, where you can send traffic to your service listening on <your-port>. So in this case you will have to manually create a load balancer that terminates TLS and forwards traffic to the NodePort on all your nodes; this load balancer can be anything that supports terminating TLS, such as haproxy, nginx, or Traefik. You can also use the Application Gateway to forward directly to your node ports; in other words, define a listener that listens on the NodePort of your cluster.
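A minimal nginx server block for that manual TLS-terminating layer might look like this sketch (the hostnames, NodePort number, and certificate paths are all placeholders):

```nginx
# Hypothetical TLS termination in front of the NodePort Service.
upstream k8s_nodeport {
    server node1.example.com:30080;  # <node-ip>:<nodePort>, one entry per node
    server node2.example.com:30080;
}

server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;

    location / {
        proxy_pass http://k8s_nodeport;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```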
I exposed a service with a static IP and an Ingress through an nginx controller, following one of the examples in the kubernetes/ingress repository. I have a second LoadBalancer service, not managed by any Ingress resource, that is no longer properly exposed after adding the new resources for the first service (I do not understand why this is the case).
I tried to add a second Ingress and LoadBalancer service to assign the second static IP, but I can't get it to work.
How would I go about exposing the second service, preferably with an Ingress? Do I need to add a second Ingress resource or do I have to reconfigure the one I already have?
Using a Service with type: LoadBalancer and using an Ingress are usually mutually exclusive ways to expose your application.
When you create a Service with type: LoadBalancer, Kubernetes creates a load balancer in your cloud account that has its own IP, opens the ports on that load balancer that match your Service, and then directs all traffic arriving at that IP to the one Service. So if you have two Service objects, each with type: LoadBalancer for two different Deployments, then you have two IPs as well (one for each Service).
The Ingress model is based on directing traffic through a single Ingress Controller which is running something like nginx. As the Ingress resources are added, the Ingress Controller reconfigures nginx to include the new Ingress details. In this case, there will be a Service for the Ingress Controller (e.g. nginx) that is type: LoadBalancer, but all of the services that the Ingress resources point to should be type: ClusterIP. Traffic for all the Ingress objects will flow through the same public IP of the LoadBalancer for the Ingress Controller Service to the Ingress Controller (e.g. nginx) Pods. The configuration details from the Ingress object (e.g. virtual host or port or route) will then determine which Service will get the traffic.
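To make the contrast concrete, here is a sketch of two Ingress resources sharing the single ingress controller IP; the hostnames and service names are placeholders, and both backend Services are assumed to be type ClusterIP:

```yaml
# Two hypothetical Ingress objects routed by the same nginx ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one
spec:
  ingressClassName: nginx
  rules:
  - host: one.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-one   # ClusterIP Service
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-two
spec:
  ingressClassName: nginx
  rules:
  - host: two.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-two   # ClusterIP Service
            port:
              number: 80
```

Both hosts resolve to the one LoadBalancer IP of the ingress controller's Service, and the controller routes by the Host header.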
So we've got a big site with 1 nginx config that handles everything! This includes SSL.
At the moment, the config is setup to route all traffic for subdomain.ourdomain.com to our exposed kubernetes service.
When I visit subdomain.ourdomain.com, it returns a 502 Bad Gateway. I've triple checked that the service inside my kubernetes pod is running properly. I'm pretty certain there is something wrong with the kubernetes config I'm using.
So what I've done:
Created kubernetes service
Exposed it using type LoadBalancer
Added the correct routing to our nginx config for our subdomain
This is what kubectl get services returns:
NAME    CLUSTER-IP     EXTERNAL-IP     PORT(S)   AGE
users   <cluster_ip>   <external_ip>   80/TCP    12m
This is what kubectl get endpoints returns:
NAME           ENDPOINTS        AGE
kubernetes     <kub_ip>:443     48d
redis-master   10.0.1.5:6379    48d
redis-slave    10.0.0.5:6379    48d
users          10.0.2.7:80      3m
All I want to do is route all traffic through our nginx configuration to our kubernetes service.
We tried routing all traffic to our kubernetes container cluster IP, but this didn't work.
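One thing to note: a cluster IP is only routable from inside the cluster, so an external nginx must target the Service's EXTERNAL-IP (or a node IP plus NodePort) instead. A sketch of the routing block under that assumption, with placeholder certificate paths:

```nginx
# Hypothetical server block for subdomain.ourdomain.com; proxy to the
# EXTERNAL-IP from `kubectl get services`, not the cluster IP.
server {
    listen 443 ssl;
    server_name subdomain.ourdomain.com;

    ssl_certificate     /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;

    location / {
        proxy_pass http://<external_ip>:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```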