Encrypt traffic between Traefik and Ingress in a Kubernetes cluster

I'm looking for a solution for encrypting traffic between the Traefik frontend and a Kubernetes Ingress resource serving multiple applications. The examples on the internet, e.g. [1], show how to enable TLS support in Traefik, or even how to use Let's Encrypt with Traefik as the Ingress controller [2]. The problem is that all of them pass the HTTPS request from the frontend to an HTTP backend. What I'm looking for is end-to-end encryption between the frontend and the backends.
I enabled TLS in Traefik as shown in [1] and TLS in the Ingress as shown in [3], but all traffic still goes from HTTPS to HTTP. And I don't know what is responsible for this decision (where it should be configured): Traefik or the Ingress. (A configuration sketch follows the links below.)
Any suggestions?
[1] https://medium.com/@patrickeasters/using-traefik-with-tls-on-kubernetes-cb67fb43a948
[2] https://blog.osones.com/en/kubernetes-ingress-controller-with-traefik-and-lets-encrypt.html
[3] https://kubernetes.io/docs/concepts/services-networking/ingress/
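For what it's worth, the HTTPS-to-HTTP step happens inside the ingress controller (Traefik) itself: it terminates TLS and, by default, talks plain HTTP to the pods, so the backend protocol has to be configured on the Traefik side. A rough sketch of the kind of annotation involved, assuming Traefik 1.x and a placeholder service my-app that already serves TLS on port 443 (check the Traefik docs for the exact annotation in your version):

    apiVersion: networking.k8s.io/v1beta1   # extensions/v1beta1 on older clusters
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        kubernetes.io/ingress.class: traefik
        # Assumption: Traefik 1.x annotation telling Traefik to use HTTPS towards the pods
        traefik.ingress.kubernetes.io/protocol: https
    spec:
      tls:
      - hosts:
        - app.example.com
        secretName: app-example-tls          # cert Traefik uses to terminate client TLS
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-app            # backend Service that serves TLS itself
              servicePort: 443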

Related

Differences between HTTP Web Server and Ingress?

I am learning the world of k8s and there is a lot of talk about ingress and ingress controllers. Conceptually it sounds identical to a web server which I will define as a service that proxies HTTP requests to web application servers. It can serve up certificates and do basic load balancing...
Whereas for Ingress: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
https://kubernetes.io/docs/concepts/services-networking/ingress/
They sound the same! So what exactly is the difference here? I can't be the only one confused by this right?
In general, a web server is responsible for accepting and fulfilling requests from clients.
A web server's fundamental job is to accept and fulfill requests from clients for static content from a website (HTML pages, files, images, video, and so on). The client is almost always a browser or mobile application and the request takes the form of a Hypertext Transfer Protocol (HTTP) message, as does the web server's response.
Nowadays you can find many web servers, like Apache or Nginx.
Kubernetes Ingress is an API object. The IBM blog post What is Kubernetes Ingress and why is it useful? describes it as follows:
Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. With Ingress, you can easily set up rules for routing traffic without creating a bunch of Load Balancers or exposing each service on the node. This makes it the best option to use in production environments.
Also, in the Kubernetes Ingress docs you can find that a Kubernetes Ingress needs an Ingress controller:
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
There are many ingress controllers like Nginx, Ambassador, Apache, etc.
To sum up:
To use an Ingress, you need some web server acting as the Ingress controller.
Kubernetes Ingress is a Kubernetes object which helps the user configure a web server (like Nginx) in a Kubernetes cluster.
As you pointed out from the documentation, it allows you to configure HTTP/HTTPS routing, traffic load balancing, SSL/TLS termination, etc.:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
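To make that summary concrete, a minimal Ingress manifest might look like the sketch below (host, class, service, and secret names are all placeholders); the Ingress only declares the routing and TLS rules, and the installed controller turns them into real proxy configuration:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      ingressClassName: nginx            # which controller should satisfy this Ingress
      tls:
      - hosts:
        - app.example.com
        secretName: app-example-tls      # certificate used for SSL/TLS termination
      rules:
      - host: app.example.com            # name-based virtual hosting
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app            # in-cluster Service that receives the traffic
                port:
                  number: 80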

Nginx Ingress Controller with Nginx Reverse Proxy

I am a bit confused with the architecture of load-balancing K8s traffic with Nginx ingress controller.
I learned that an ingress controller is supposed to configure the load-balancer you're using according to ingress configurations.
So if I want to use the Nginx ingress controller and I have a physical server running Nginx that stands in front of my network, how can I make the ingress controller configure it?
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
The Nginx Ingress Controller uses a Service of type LoadBalancer as the entry point for traffic into the controller, which then routes it to the particular backend Services (see the sketch below).
I strongly suggest going through the official documentation in order to get a good understanding of the topic and see some examples of using it.
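As a rough sketch of that entry point (most installation methods create this Service for you, and the names and labels below are placeholders), the controller is typically exposed like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer                          # asks the cloud provider for an external IP
      selector:
        app.kubernetes.io/name: ingress-nginx     # matches the controller pods
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443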
Is the nginx ingress controller supposed to (or can it) configure an Nginx machine?
NGINX Ingress Controller works with both NGINX and NGINX Plus and supports the standard Ingress features - content-based routing and TLS/SSL termination.

Nginx Ingress, HTTPS, and internal HTTPS

So, here's what's up: we have HTTPS being handled externally by our Nginx ingress on a Kubernetes cluster. This is great and all, but it means any traffic between the proxy and its backing service goes over HTTP. This is a bit of a flaw in our security coverage, so we're trying to get that secondary internal traffic to travel over HTTPS as well.
Now, the ingress has an nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName annotation which should allow us to verify the backend services' certificates. But no matter how I set up the CA, etc., the connection terminates before sending anything.
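For context, a sketch of how those ingress-nginx annotations fit together, assuming a placeholder backend Service my-api that already serves TLS and a secret default/backend-ca holding the CA that signed its certificate; proxy-ssl-verify defaults to off, so verification has to be enabled explicitly:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: internal-https-example
      annotations:
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"              # proxy to the pods over TLS
        nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/backend-ca" # namespace/name of a secret holding the CA (ca.crt)
        nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"                 # verify the backend certificate
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - api.example.com
        secretName: api-example-tls      # external (client-facing) certificate
      rules:
      - host: api.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 443            # the backend's TLS port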

kubernetes: nginx ingress vs traefik ingress vs ha-proxy ingress vs kong ingress

We are looking at the various open-source ingress controllers available for Kubernetes and need to choose the best one among them. We are evaluating the below four ingress controllers:
Nginx ingress controller
Traefik ingress controller
Ha-proxy ingress controller
Kong ingress controller
What are the differences between these in terms of features and performance, and which one should be adopted in production? Please provide your suggestions.
One difference I'm aware of is that the haproxy and nginx ingresses can work in TCP mode, whereas traefik only works in HTTP/HTTPS modes. If you want to ingress services like SMTP or MQTT, then this is a useful distinction (see the ConfigMap sketch below).
Also, haproxy supports the "PROXY" protocol, allowing you to pass the real client IP to backend services. I used the haproxy ingress recently for a docker-mailserver helm chart - https://hub.helm.sh/charts/funkypenguin
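To illustrate the TCP point with one hedged example: ingress-nginx exposes raw TCP services not through the Ingress resource itself but through a ConfigMap referenced by the controller's --tcp-services-configmap flag; the namespaces, services, and ports below are placeholders:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      # external port -> namespace/service:port
      "1883": "default/mqtt-broker:1883"
      "587": "mail/docker-mailserver:587"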

Nginx ingress controller vs HAProxy load balancer

What is the difference between the Nginx ingress controller and the HAProxy load balancer in Kubernetes?
First, let's have a quick overview of what an Ingress Controller is in Kubernetes.
Ingress Controller: a controller that responds to changes in Ingress rules and updates its internal configuration accordingly.
So, both the HAProxy ingress controller and the Nginx ingress controller will listen for these Ingress configuration changes and configure their own running server instances to route traffic as specified in the targeted Ingress rules. The main differences come down to the specific differences in use cases between Nginx and HAProxy themselves.
For the most part, Nginx comes with more batteries included for serving web content, such as configurable content caching, serving local files, etc. HAProxy is more stripped down, and better equipped for high-performance network workloads.
The available configurations for HAProxy can be found here and the available configuration methods for Nginx ingress controller are here.
I would add that HAProxy is capable of doing TLS/SSL offloading (SSL termination or TLS termination) for non-HTTP protocols such as MQTT, Redis, and FTP-type workloads.
The differences go deeper than this, however, and these issues go into more detail on them:
https://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode
HAProxy vs. Nginx
