Nginx server behind Nginx-Ingress controller

We decided to move our apps from Service:LoadBalancer to Ingress, and I've chosen the Nginx Ingress Controller, as I'm familiar with it and because it's one of the most popular controllers in the Kubernetes world.
Previously we had an Nginx => uWSGI combination standing behind an ELB. We compile Nginx from source, as we need some third-party modules and Lua support.
Before: ELB => Nginx Server => uWSGI
After: ELB => Nginx Ingress (Load Balancer) => Nginx (Server) => uWSGI
My question is: is it okay to have two Nginx instances in a proxy chain?
I understand that one plays the role of load balancer and the other is the server itself. But for me it comes with pain: if I change some option in the server's nginx.conf, like increasing the client body size to 8 MB, I have to do the same on the Nginx Ingress. I'm also wondering how to set timeouts, as there is one timeout between ingress => server and another between server => uwsgi, and in general how to tune performance when there are three proxies before a request hits the app.
Is it good practice to remove the server Nginx, so the Ingress Controller acts as server and load balancer at the same time? And what about the third-party modules we use?

There's nothing wrong in principle with having two or more Nginx instances in a proxy chain, other than, as alluded to in the question and below, the extra complexity.
It is a pain to maintain consistent configuration across multiple proxies, and in particular to have upstream configuration bleed into the ingress. It can get very complicated when the same ingress serves multiple upstreams, each with different traffic requirements. But this is often nevertheless unavoidable.
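For example (a hedged sketch with assumed values), raising the body-size limit on the server Nginx means mirroring it on the ingress, otherwise the ingress rejects large requests before the server ever sees them:

    # Server nginx.conf (behind the ingress): allow 8 MB request bodies
    client_max_body_size 8m;

    # nginx-ingress: the equivalent limit, set per Ingress via annotation
    # (in the Ingress resource's metadata.annotations):
    #   nginx.ingress.kubernetes.io/proxy-body-size: "8m"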
Each hop will have its own distinct timeout and retry configuration, and managing them can be complicated, especially the downstream timeout when the upstream has retries. You can wind up with very strange failure patterns.
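As an illustration (a sketch with assumed values), each hop's read timeout generally needs to be at least as large as the next hop's worst-case response time, or the outer proxy gives up while the inner one is still waiting:

    # Hop 1, ingress => server nginx: nginx-ingress annotations
    #   nginx.ingress.kubernetes.io/proxy-read-timeout: "75"
    #   nginx.ingress.kubernetes.io/proxy-send-timeout: "75"

    # Hop 2, server nginx => uWSGI (nginx.conf), kept below the ingress timeout:
    uwsgi_read_timeout 60s;
    uwsgi_send_timeout 60s;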
It is not a good idea to bundle an application with an ingress controller. Ingress is about offering a stable entry point into the cluster for out-of-cluster traffic, and distributing that traffic to multiple upstream applications in the cluster. If there is only one upstream application, you don't really need an Ingress at all, so if possible it is much better to just expose it as a Service, using either NodePort or LoadBalancer, depending on circumstance.
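A minimal sketch of that alternative (name, selector, and ports are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app               # hypothetical name
    spec:
      type: LoadBalancer         # or NodePort, depending on circumstance
      selector:
        app: my-app
      ports:
        - port: 80               # port exposed by the load balancer
          targetPort: 8080       # port the application listens on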

Related

Putting NGINX in front of kafka

I have Elasticsearch, Filebeat, and Kibana behind an NGINX server, and all three of them use SSL and the basic authentication of the Nginx reverse proxy. I want to place Kafka behind NGINX as well. Kafka communicates with Filebeat. Is there any possible way that Filebeat (with SSL) and Kafka (without SSL) can communicate?
I mean, is there some kind of exception we can add in the NGINX configuration?
There's not much benefit to using Nginx with Kafka beyond the initial client connection. In other words, yes, in theory you can use the stream directive and point bootstrap.servers at it, but Kafka will return its advertised.listeners after that, and clients then bypass Nginx to communicate directly with the individual brokers (including for authentication).
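To make the limitation concrete, here is a sketch of the stream approach (broker address assumed); Nginx can forward the initial TCP bootstrap connection, but nothing after it:

    # nginx.conf: raw TCP pass-through for the Kafka bootstrap address only.
    # After bootstrap, clients reconnect directly to whatever brokers Kafka
    # advertises via advertised.listeners, bypassing this proxy entirely.
    stream {
        upstream kafka_bootstrap {
            server kafka-0.example.internal:9092;   # hypothetical broker
        }
        server {
            listen 9092;
            proxy_pass kafka_bootstrap;
        }
    }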
Related
Allow access to kafka via nginx

Differences between HTTP Web Server and Ingress?

I am learning the world of k8s, and there is a lot of talk about ingresses and ingress controllers. Conceptually, an ingress sounds identical to a web server, which I will define as a service that proxies HTTP requests to web application servers. It can serve up certificates and do basic load balancing...
Whereas for Ingress, the docs say: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
https://kubernetes.io/docs/concepts/services-networking/ingress/
They sound the same! So what exactly is the difference here? I can't be the only one confused by this, right?
In general, a web server is responsible for accepting and fulfilling requests from clients.
A web server's fundamental job is to accept and fulfill requests from clients for static content from a website (HTML pages, files, images, video, and so on). The client is almost always a browser or mobile application, and the request takes the form of a Hypertext Transfer Protocol (HTTP) message, as does the web server's response.
There are many web servers, such as Apache or Nginx.
Kubernetes Ingress, on the other hand, is an API object. The IBM blog What is Kubernetes Ingress and why is it useful? describes it:
Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. With Ingress, you can easily set up rules for routing traffic without creating a bunch of Load Balancers or exposing each service on the node. This makes it the best option to use in production environments.
Also, in the Kubernetes Ingress docs you can find that a Kubernetes Ingress needs an Ingress Controller.
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
There are many ingress controllers, such as Nginx, Ambassador, Apache, etc.
To sum up:
To use an Ingress, you need some web server acting as the Ingress Controller.
A Kubernetes Ingress is a Kubernetes object that lets the user configure a web server (like Nginx) in a Kubernetes cluster.
As you pointed out from the documentation, it allows you to configure HTTP/HTTPS routing, traffic load balancing, SSL/TLS termination, etc.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
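For concreteness, a minimal Ingress resource might look like this (host and service names are assumptions); the ingress controller watches objects like this and reconfigures its web server accordingly:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
        - host: app.example.com          # hypothetical host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service     # hypothetical Service
                    port:
                      number: 80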

In-cluster nginx proxy (sidecar) in Kubernetes to re-write headers

Background
I have an in-cluster Kubernetes pod/application that works fine when accessed via an nginx-ingress ingress controller (it requires a specific Host HTTP header), but it cannot be accessed by other in-cluster pods/applications (e.g. for testing), because those pods use different host names (e.g. service-name.namespace.svc.cluster.local) rather than the FQDN of the K8S master (in the LAN).
Plan So Far
I think the only way to (easily) resolve this is to set up an in-cluster forward-proxy nginx instance. Ideally, the service would either be a sidecar for the pod that needs its headers re-written, or a general in-cluster proxy that multiple services can access.
Question
How would I setup an in-cluster nginx forward proxy service?
Should it be a sidecar, or a general service any pod can access?
Work So Far
The linked "similar" questions don't appear to be helpful for my use case (i.e. they don't show how to configure an in-cluster proxy), or they are focused on proxying to IPs external to the cluster (whereas I need to proxy HTTP requests, and re-write their headers, to in-cluster resources).
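For reference, the header-rewriting part itself is small in Nginx; a sketch of what a sidecar's config might contain (host names are assumptions):

    # Sidecar nginx.conf sketch: accept in-cluster traffic and forward it
    # with the Host header the application expects.
    server {
        listen 8080;
        location / {
            proxy_pass http://service-name.namespace.svc.cluster.local;
            # Rewrite the Host header to the externally-expected name:
            proxy_set_header Host app.example.com;   # hypothetical FQDN
        }
    }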

Nginx ingress controller vs HAProxy load balancer

What is the difference between the Nginx ingress controller and the HAProxy load balancer in Kubernetes?
First, let's have a quick overview of what an Ingress Controller is in Kubernetes.
Ingress Controller: a controller that responds to changes in Ingress rules and updates its internal configuration accordingly.
So, both the HAProxy ingress controller and the Nginx ingress controller will listen for these Ingress configuration changes and configure their own running server instances to route traffic as specified in the targeted Ingress rules. The main differences come down to the specific differences in use cases between Nginx and HAProxy themselves.
For the most part, Nginx comes with more batteries included for serving web content, such as configurable content caching, serving local files, etc. HAProxy is more stripped down, and better equipped for high-performance network workloads.
The available configurations for HAProxy can be found here and the available configuration methods for Nginx ingress controller are here.
I would add that HAProxy is capable of TLS/SSL offloading (SSL or TLS termination) for non-HTTP protocols such as MQTT, Redis, and FTP-type workloads.
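As a sketch (certificate path and addresses are assumptions), TLS termination for a raw TCP protocol in HAProxy is just a tcp-mode frontend with an SSL bind:

    # HAProxy: terminate TLS for a non-HTTP protocol (e.g. MQTT) and pass
    # plain TCP to the broker behind it.
    frontend mqtt_tls
        mode tcp
        bind :8883 ssl crt /etc/haproxy/certs/mqtt.pem   # hypothetical cert
        default_backend mqtt_brokers

    backend mqtt_brokers
        mode tcp
        server broker1 10.0.0.21:1883 check              # hypothetical broker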
The differences go deeper than this, however, and these discussions go into more detail on them:
https://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode
HAProxy vs. Nginx

HAProxy vs. Nginx

I was looking at using HAProxy and Nginx for load balancing, and I had some questions:
Should I use JUST HAProxy over Nginx for the proxy server?
Is there any reason to have HAProxy and Nginx installed on the same proxy server?
HAProxy is a load balancer; it doesn't know how to serve files or dynamic content. Nginx is a web server capable of many interesting things. If you only need to load balance and provide HA for a third web server, then HAProxy is enough. If you need to serve some static content or implement some logic in the routing of requests before terminating them on a third server, then you may need Nginx.
The reason you can see haproxy+nginx on the same host is that it allows you to bring down a single nginx instance while haproxy continues to serve requests from the other hosts. Imagine having round-robin DNS using A records:
myapp.com IN A 1.1.1.1
myapp.com IN A 1.1.1.2
Here 1.1.1.1 and 1.1.1.2 are two hosts with haproxy+nginx, configured to load balance between them. Now suppose for some reason 1.1.1.1's nginx goes down: the browsers that come to 1.1.1.1 are still served by the haproxy on it, which in turn gets data from 1.1.1.2's nginx.
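A sketch of the HAProxy side of that setup (ports assumed; nginx moved off :80 so haproxy can own it):

    # On each host: haproxy balances across BOTH hosts' nginx instances,
    # so a dead local nginx simply drops out of the rotation.
    frontend http_in
        mode http
        bind :80
        default_backend nginx_servers

    backend nginx_servers
        mode http
        balance roundrobin
        server nginx1 1.1.1.1:8080 check   # assumed nginx port
        server nginx2 1.1.1.2:8080 check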
HAProxy is definitely the better, more fully featured load balancer (compared to the free Nginx, not Nginx Plus, though one could argue that as well).
One thing that HAProxy sadly still can't do is generic UDP connections. So we used HAProxy and Nginx together on our logging LBs. But HAProxy released support for syslog over UDP in 2.3, so we are about to change that. :)
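A sketch of what that looks like with the log-forward section added in HAProxy 2.3 (addresses are assumptions):

    # HAProxy 2.3+: receive syslog over UDP and relay it onward, removing
    # the need for a separate nginx UDP stream proxy.
    log-forward syslog_lb
        dgram-bind :514                    # listen for UDP syslog
        log 10.0.0.31:514 local0           # hypothetical downstream collector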
We use HAProxy together with nginx. There are a number of reasons.
Nginx can do everything (more or less), but you don't want your load balancer serving web pages. One error in the config (which might have nothing to do with load balancing) and your entire setup comes to a screeching halt. Imagine that you have a Node.js app, a .NET Core app, static files served by Nginx, and a PHP app. You make one mistake and all four apps come to a standstill. You have lost your redundancy too, even if you have multiple instances of each app.
Even if you say that Nginx will only do the load balancing, Nginx's HTTP proxying cannot send the PROXY protocol to its upstreams (its stream module can, but plain HTTP forwarding cannot), which is problematic if you forward to other servers that are also not serving the pages themselves.
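A minimal sketch of the contrast (addresses assumed): HAProxy can speak the PROXY protocol to its backends, so the real client IP survives the extra hop:

    # HAProxy backend: send-proxy-v2 prepends a PROXY protocol v2 header to
    # each connection; the backend must accept it (e.g. nginx's
    # "listen 80 proxy_protocol;").
    backend web_servers
        mode http
        balance roundrobin
        server web1 10.0.0.11:80 check send-proxy-v2   # hypothetical servers
        server web2 10.0.0.12:80 check send-proxy-v2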
In addition, there is something to be said for doing one thing and doing it well. Nginx is the master toolbox today; it does almost everything. But your load balancer is supposed to be the most stable part of your setup. Wouldn't you prefer to use something that was built just for load balancing?
If you use Varnish, HAProxy works well with it.
If you want an added level of balancing, you can also use DNS as a load balancer in front of multiple HAProxy instances. DNS is not meant for this per se, but you will always have some weak link; your load balancer can crash too, even if it's managed by your cloud provider. Most web browsers today will try the other servers if there is more than one in your DNS entry, so it acts like a load balancer. Your DNS should be very reliable, thus increasing your uptime.
We use two HAProxy instances with two Varnish instances and two DNS entries.
