How to get HTTPS on AKS without ingress - nginx

My problem is simple. I have an AKS deployment with a LoadBalancer service that needs to use HTTPS with a certificate.
How do I do this?
Everything I'm seeing online involves Ingress and nginx-ingress in particular.
But my deployment is not a website; it's a Dropwizard service with a REST API on one port and an admin service on another. I don't want to map the ports to paths on port 80; I want to keep the ports as they are. Why is HTTPS tied to ingress?
I just want HTTPS with a certificate and nothing more changed, is there a simple solution to this?

A sidecar container running nginx with the correct certificates (possibly loaded from a Secret or a ConfigMap) will do the job without an Ingress. The nginx-ssl-proxy container seems to be a good example of this approach.
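For reference, here is a minimal sketch of that setup. All names (the your-app-tls Secret, the your-app-nginx-conf ConfigMap, the image, and the ports) are placeholder assumptions; the idea is simply that nginx terminates TLS on one pod port and proxies to the Dropwizard port over localhost:

apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app-nginx-conf
data:
  default.conf: |
    server {
        listen 8443 ssl;
        ssl_certificate     /etc/nginx/certs/tls.crt;
        ssl_certificate_key /etc/nginx/certs/tls.key;
        location / {
            # forward decrypted traffic to the app port inside the same pod
            proxy_pass http://127.0.0.1:8080;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: dropwizard
        image: your-registry/your-app:latest   # placeholder image
        ports:
        - containerPort: 8080                  # plain-HTTP app port
      - name: tls-proxy                        # the nginx sidecar
        image: nginx:stable
        ports:
        - containerPort: 8443                  # HTTPS port your Service targets
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/certs
          readOnly: true
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
          readOnly: true
      volumes:
      - name: tls-certs
        secret:
          secretName: your-app-tls             # a kubernetes.io/tls Secret
      - name: nginx-conf
        configMap:
          name: your-app-nginx-conf

The admin port could get a second server block (or a second sidecar) in the same way, so both ports keep their identities instead of being collapsed onto 80/443.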

Yes, that's right: as of this writing, an Ingress will only work on port 80 or port 443. It could potentially be extended to use any port, since nginx, Traefik, haproxy, etc. can all listen on different ports.
So you are down to either a LoadBalancer or a NodePort type of Service. Type LoadBalancer will not terminate TLS by itself, since Azure load balancers are layer 4. So you will have to use an Application Gateway, and it's preferable to combine it with an internal load balancer for security reasons.
Since you are using Azure, you can run something like this (assuming that your K8s cluster is configured correctly to use the Azure cloud provider, either via the --cloud-provider option or the cloud-controller-manager):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: your-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: <your-port>
  selector:
    app: your-app
EOF
and that will create an Azure load balancer on the port you like for your service. Behind the scenes, the load balancer will point to a port on the nodes, and within the nodes firewall rules will route the traffic to your container. Then you can configure the Application Gateway. Here's a good article describing it, but using port 80; you will have to change it to use port 443 and configure the TLS certs. The Application Gateway also supports end-to-end TLS, in case you want to terminate TLS directly on your app too.
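As a rough sketch (resource names, the PFX file, and the subnet are all placeholders, and the exact flags may vary by CLI version), creating such a gateway could look something like:

$ az network application-gateway create \
    --name your-app-gw \
    --resource-group your-rg \
    --vnet-name your-vnet \
    --subnet appgw-subnet \
    --capacity 2 \
    --frontend-port 443 \
    --cert-file your-cert.pfx \
    --cert-password '<pfx-password>' \
    --http-settings-port <your-port> \
    --http-settings-protocol Http \
    --servers <internal-lb-ip>

Switching --http-settings-protocol to Https is the end-to-end TLS variant, with the gateway re-encrypting traffic toward your app.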
The other option is NodePort, and you can run something like this:
$ kubectl expose deployment <deployment-name> --type=NodePort
Then Kubernetes will pick a random port on all your nodes from which it forwards traffic to your service listening on <your-port>. So, in this case, you will have to manually create a load balancer (or some other traffic source) that terminates TLS on <your-port> and forwards to that NodePort on all your nodes; this load balancer can be anything that supports terminating TLS, such as haproxy, nginx, or Traefik. You can also use the Application Gateway to forward directly to your node ports; in other words, define a listener that listens on the NodePort of your cluster.
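For example, a TLS-terminating nginx in front of the cluster could look roughly like this (node IPs, the 32231 NodePort, and the cert paths are placeholders):

upstream k8s_nodeport {
    server 10.240.0.4:32231;   # <node-ip>:<node-port>, one entry per node
    server 10.240.0.5:32231;
}

server {
    listen 443 ssl;
    server_name your-app.example.com;

    ssl_certificate     /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;

    location / {
        # plain HTTP toward the NodePort; TLS ends here
        proxy_pass http://k8s_nodeport;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}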

Related

Kubernetes Nginx Ingress to pod communication over https

I am doing some research on how to implement a secure HTTPS connection between the Nginx Ingress and backend services. So far I have SSL set up in the Nginx ingress controller, which uses cert-manager with Let's Encrypt to rotate certificates via the http-01 challenge.
Here is my scenario:
1. Client from internet -> 2. Load balancer -> 3. Ingress controller (terminates TLS traffic) -> 4. Service (port 80) -> 5. Pod (port 80).
So my question is: how can I secure the communication between the ingress controller and the pod so that traffic is encrypted end to end? Do I need my own certificate authority to do that? If so, are there any open-source solutions that can handle certificate management, just like cert-manager?
1. Nginx ingress controller + DAPR
I am not sure posting YouTube URLs here is common (at least I have never seen anyone do it), but I think this is exactly what you want. Your scenario is discussed in the first topic, so you only need to watch that part. Plus, as a benefit, you will see a step-by-step installation there. Personally, I found the video very helpful:
Secure Ingress pods communication
2. You can achieve that with Istio itself; see the PeerAuthentication sketch after this list.
Istio By Example!: Secure Ingress
3. Istio + Calico network policy for Istio
Enforce network policy for Istio
The Calico support for the Istio service mesh has the following benefits:
- Pod traffic controls: lets you restrict ingress traffic inside and outside pods and mitigate common threats to Istio-enabled apps.
- Supports security goals: enables adoption of a zero-trust network model for security, including traffic encryption, multiple enforcement points, and multiple identity criteria for authentication. A policy sketch follows below.
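To make option 2 concrete, a minimal sketch (assuming Istio is installed with sidecar injection enabled for both the ingress gateway and the app pods) is a mesh-wide PeerAuthentication that forces mTLS between sidecars:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when placed in the root namespace
spec:
  mtls:
    mode: STRICT            # reject any plaintext pod-to-pod traffic

And for option 3, a sketch of a Calico policy (the app label, namespace, and service-account name are placeholders) that only lets the ingress controller's identity reach the backend pods:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-only-ingress-controller
  namespace: default
spec:
  selector: app == 'backend'
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      serviceAccounts:
        names: ["nginx-ingress"]   # identity of the ingress controller pods
    destination:
      ports: [80]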
Replace Let's Encrypt with AWS Certificate Manager certificates, because they are free. Validate the domains you use inside your cluster and then edit the main Service of your ingress controller. If you are on AWS, you can use annotations like these:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:XXXXXXXX"
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
  service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
  service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"

Unable to get real remote IP in AKS with advanced networking

We have two AKS clusters for different environments. Both use a Nginx server as a custom ingress. By that I mean that it acts like an ingress, but it is just a normal Nginx deployment behind a service. There are several good reasons for that setup, the main one being that ingress did not exist in AKS when we started.
The services are defined like this:
apiVersion: v1
kind: Service
metadata:
  name: <our name>
  namespace: <our namespace>
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: <our app>
  loadBalancerIP: <our ip>
  type: LoadBalancer
  externalTrafficPolicy: Local
We have configured Nginx with the real ip module like this:
real_ip_header X-Original-Forwarded-For;
set_real_ip_from 10.0.0.0/8; # or whatever ip is correct
One environment uses the old basic networking, networkPlugin=kubenet. There Nginx logs the real client IP addresses in the log and can use them for access controls. The other uses advanced networking, networkPlugin=azure. There Nginx logs the IP address of one of the nodes, which is useless. Both the X-Original-Forwarded-For and the standard X-Forwarded-For headers are empty and of course the source IP is from the node, not from the client.
Is there a way around this? If at all possible we would like to avoid defining a "real" ingress, since our own Nginx server contains custom configuration that would be hard to duplicate in such a setup; plus it is not clear that a standard ingress would help either.
Microsoft should have fixed this by now for real ingresses. However, apparently the fix doesn't cover our case, where Nginx runs as a pod behind a service with advanced networking. We were told to use the workaround posted by denniszielke in https://github.com/Azure/AKS/issues/607, where the iptables rules on all nodes are updated regularly. Quite dirty in my view, but it works.
We still have the service defined as above with "externalTrafficPolicy: Local" and we have installed the ConfigMap and DaemonSet from the link. I changed the script to reduce logging a bit and moved both to another namespace.

NGINX loadbalancing on Kubernetes

I have some services running in Kubernetes. I need an NGINX in front of them, to redirect traffic according to the URLs, handle SSL encryption and load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and have a load-balancing Service expose NGINX? Do I understand it right that the gcloud LB would pass the configured ports (e.g. 80 and 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is stateless here. And because nginx is fronted by a gcloud TCP load balancer, if one of your nginx pods goes down for any reason, the load balancer simply stops forwarding traffic to it. Since you already have a gcloud load balancer, you will have to use a NodePort Service type and point the load balancer at all the nodes of your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the services inside your K8s cluster. I recommend setting up an nginx ingress controller instead, which will essentially manage the nginx.conf for you through Ingress resources, and which you can also expose as a LoadBalancer Service type. A sketch of such an Ingress follows.
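A minimal sketch of an Ingress (host, paths, Secret, and service names are all placeholders, using the current networking.k8s.io/v1 API) that does both the URL routing and the TLS termination:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-routing
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls        # placeholder TLS Secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service      # placeholder backend Services
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80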

Google cloud CDN, storage and container engine issue with backend-service

I have a specific use case that I can not seem to solve.
A typical gcloud setup:
A K8S cluster
A gcloud storage bucket
A gcloud loadbalancer
I managed to get my domain https://cdn.foobar.com/uploads/ to point to a Google storage backend without any issue: I can access files. It's the backend-service one that fails.
I would like the CDN to act as a cache: when an HTTP request such as https://cdn.foobar.com/assets/x.jpg hits it, if it does not have a copy of the asset it should query another domain, https://foobar.com/assets/x.jpg.
I understood that this is what the load balancer's backend services are for. (Right?)
The problem is failing health checks. The backend service points to the instance group of the K8s cluster and requires a port (default 80?); port 80 failed. I guessed that I needed a firewall rule exposing the 32231 NodePort of my web application service so the load balancer could query it. That still failed with a 502.
?> kubectl describe svc
Name: backoffice-service
Namespace: default
Labels: app=backoffice
Selector: app=backoffice
Type: NodePort
IP: 10.7.xxx.xxx
Port: http 80/TCP
NodePort: http 32231/TCP
Endpoints: 10.4.x.x:8500,10.4.x.x:8500
Session Affinity: None
No events.
I ran out of ideas at this point.
Any hints in the right direction would be much appreciated.
When deploying your service as type 'NodePort', you are exposing the service on each node's IP, but it does not get an external load balancer IP of its own; for that you need to expose your service as type 'LoadBalancer'.
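For instance, reusing the names from the kubectl describe output above (a sketch; adjust the names to your deployment):

$ kubectl expose deployment backoffice --name=backoffice-lb --type=LoadBalancer --port=80 --target-port=8500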
Since you're looking to use an HTTP(S) load balancer, I recommend using a Kubernetes Ingress resource. This resource will take care of configuring the HTTP(S) load balancer and the required ports that your service is using, as well as the health checks on the specified port.
Since you're securing your application, you will also need to configure a Secret object for securing the Ingress.
This example will help you get started on an Ingress with TLS termination.
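A minimal sketch using the backoffice-service from the output above (the Secret name and the cert/key files are placeholders):

# create the TLS Secret the Ingress references
$ kubectl create secret tls cdn-foobar-tls --cert=fullchain.pem --key=privkey.pem

$ cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backoffice-ingress
spec:
  tls:
  - hosts:
    - cdn.foobar.com
    secretName: cdn-foobar-tls
  rules:
  - host: cdn.foobar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backoffice-service
            port:
              number: 80
EOF

On GKE, applying this Ingress is what provisions the HTTP(S) load balancer and its health checks against the service's NodePort.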

How can I route traffic through a custom proxy to my kubernetes container?

So we've got a big site with 1 nginx config that handles everything! This includes SSL.
At the moment, the config is setup to route all traffic for subdomain.ourdomain.com to our exposed kubernetes service.
When I visit subdomain.ourdomain.com, it returns a 502 Bad Gateway. I've triple checked that the service inside my kubernetes pod is running properly. I'm pretty certain there is something wrong with the kubernetes config I'm using.
So what I've done:
Created kubernetes service
Exposed it using type LoadBalancer
Added the correct routing to our nginx config for our subdomain
This is what the kubectl get services returns:
users <cluster_ip> <external_ip> 80/TCP 12m
This is what kubectl get endpoints returns:
kubernetes <kub_ip>:443 48d
redis-master 10.0.1.5:6379 48d
redis-slave 10.0.0.5:6379 48d
users 10.0.2.7:80 3m
All I want to do is route all traffic through our nginx configuration to our kubernetes service.
We tried routing all traffic to our kubernetes container cluster IP, but this didn't work.
