HAProxy vs. Nginx

I was looking at using HAProxy and Nginx for load balancing, and I had some questions:
Should I use JUST HAProxy over Nginx for the proxy server?
Is there any reason to have HAProxy and Nginx installed on the same proxy server?

HAProxy is a "load balancer": it doesn't know how to serve files or dynamic content. nginx is a web server capable of many interesting things. If you only need to load balance (plus HA) some third web server, then HAProxy is enough. If you need to serve some static content, or apply some routing logic to requests before terminating them on a third server, then you may need nginx.
The reason you often see haproxy+nginx on the same host is that it allows you to bring down a single nginx instance while haproxy continues to serve requests from the other hosts. Imagine having round-robin DNS using A records:
myapp.com IN A 1.1.1.1
myapp.com IN A 1.1.1.2
Where 1.1.1.1 and 1.1.1.2 are two hosts running haproxy+nginx, configured to load balance between them. Now suppose that for some reason nginx on 1.1.1.1 goes down. Browsers that land on 1.1.1.1 are still served by the haproxy running there, which in turn gets the content from the nginx on 1.1.1.2.
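As a rough sketch of the haproxy side on each host (the port and health-check details are made up for illustration, and it assumes the local nginx has been moved to port 8080 so haproxy can own port 80):
frontend http-in
    bind *:80
    mode http
    default_backend nginx_pool

backend nginx_pool
    mode http
    balance roundrobin
    option httpchk GET /
    server local  127.0.0.1:8080 check   # nginx on this host
    server remote 1.1.1.2:8080 check     # nginx on the other host
On 1.1.1.2 the config mirrors this, pointing its "remote" server back at 1.1.1.1.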

HAProxy is definitely the better, more fully featured load balancer (compared to the free nginx, not NGINX Plus, though one could argue about that as well).
One thing that HAProxy sadly still can't do is proxy generic UDP connections, so we used HAProxy and nginx together on our logging load balancers. But HAProxy added syslog/UDP support in 2.3, so we are about to change that. :)

We use HAProxy together with nginx. There are a number of reasons.
Nginx can do everything (more or less), but you don't want your load balancer serving web pages. One error in its config (which might have nothing to do with load balancing) and your entire setup comes to a screeching halt. Imagine that you have a Node.js app, a .NET Core app, static files served by Nginx, and a PHP app. One mistake and all four apps come to a standstill. You have also lost your redundancy, even if you have multiple instances of each app.
Even if you decide that Nginx will only do the load balancing, its PROXY protocol support is limited: it can accept the protocol on incoming connections, but its HTTP proxy cannot pass it on to upstream servers, which is problematic if you forward to other servers that are themselves not the ones serving the pages.
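For illustration, the HAProxy-to-backend hand-off with the PROXY protocol looks roughly like this (addresses, ports and names are made up, not taken from our setup):
# haproxy.cfg: announce the real client address to the backend
backend web_nodes
    server web1 10.0.0.21:8080 check send-proxy

# nginx on the backend: accept the PROXY protocol and recover the client IP
server {
    listen 8080 proxy_protocol;
    set_real_ip_from 10.0.0.0/8;     # trust only the load balancer's network
    real_ip_header proxy_protocol;
    root /var/www/html;
}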
In addition there is something to be said for doing one thing and doing it well. Nginx is the master toolbox today. It does almost everything. Your load balancer is supposed to be the most stable part of your setup. Wouldn't you prefer to use something that was built just for load balancing?
If you use Varnish then HAProxy works well in front of it; the two are commonly deployed together.
If you want an added level of balance, you can also use DNS as a load balancer in front of multiple HAProxy instances. DNS is not really meant for this per se, but you will always have some weak link; your load balancer can crash too, even if it's managed by your cloud provider. Most web browsers today will try the other servers if there is more than one address in your DNS entry, so it acts somewhat like a load balancer. Your DNS should be very reliable, thus increasing your uptime.
We use two HAProxy instances with two Varnish instances and two DNS entries.

Related

Docker, nginx and several sites on one server

I have a server with nginx and one working app. I want to add several apps to this server. I would like to clarify a few things for myself.
What is the difference between load balancer and reverse proxy?
In which situations should I use the first, and in which situations should I use the second?
What should I use if my sites are static, and what if not static?
And additionally it would be a big plus to hear about containers in the context of running several sites with nginx
Differences between load balancer and reverse proxy
A reverse proxy accepts a request from a client, forwards it to a server that can fulfill it, and returns the server’s response to the client.
A load balancer distributes incoming client requests among a group of servers, in each case returning the response from the selected server to the appropriate client.
Taken from nginx docs
TL;DR:
Reverse proxying is about: routing requests to the correct server using the domain name
Load balancing is about: distributing load to multiple instances
What should I use if my sites are static, and what if not static?
You can combine an HTTP reverse proxy + load balancing with both static and non-static web apps, so it depends.
And additionally it would be a big plus to hear about containers in the context of running several sites with nginx
I recommend one nginx container per app / site + a dynamic reverse proxy, traefik in particular (http://traefik.io)
You need a reverse proxy to route the incoming traffic to the proper application taking into account the content of the original request (and rules that you may define).
When the target application(s) is determined, you will need to load balance them in order to distribute the amount of work across them.
Both tasks can be done by software like classic nginx, apache, haproxy, etc., or by tools designed for the microservices world, like fabio, traefik and others.
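As a rough nginx sketch of both roles at once (all names, addresses and ports are made up): the server_name lines do the reverse-proxy routing by domain, and the upstream block does the load balancing.
upstream app_pool {
    server 10.0.0.11:3000;            # two instances of the same app
    server 10.0.0.12:3000;
}

server {
    listen 80;
    server_name app.example.com;      # reverse proxy: route by domain name
    location / {
        proxy_pass http://app_pool;   # load balancing: spread across the pool
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name static.example.com;   # another site on the same nginx
    root /var/www/static;
}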

How to eliminate Nginx from the production stack by using Cloudflare as a substitute reverse proxy?

Is it possible to use "cloudflare" as a reverse proxy for hosting several websites on the same host machine but on different ports?
Cloudflare can replace some of the features of Nginx, specifically:
Caching resources
Rate limiting and protecting your website
Redirecting access to your website to another server
But you still need Nginx or another web server for the following tasks:
Handling the TCP connections between Cloudflare and the server that generates the response (and HTTPS should be used on that leg)
Generating the actual response, via FastCGI (PHP, Python, Ruby, etc.) or just delivering a file/resource (server and location blocks in Nginx)
Setting the correct headers for the response, for caching and content type (Cloudflare relies on these)
Cloudflare does not support sending your requests to specific ports on the origin host - but that would still not help you much, because Cloudflare has a very specific feature set, and generating responses is not part of it, which is why you need a web server.
If you want to reduce the work needed to maintain Nginx, you can restrict Nginx to only answer requests coming from Cloudflare and do the rate limiting and some other tasks in Cloudflare.
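A rough sketch of that restriction in nginx (the two include files are ones you would generate yourself from Cloudflare's published ranges at https://www.cloudflare.com/ips/; the paths and certificate locations are made up):
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    include /etc/nginx/cloudflare-allow.conf;    # a file of "allow <range>;" lines
    deny all;                                    # drop anything not coming via Cloudflare

    real_ip_header CF-Connecting-IP;             # recover the visitor's real address
    include /etc/nginx/cloudflare-real-ip.conf;  # a file of "set_real_ip_from <range>;" lines

    # ... your normal location / FastCGI configuration goes here
}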

Why use gunicorn with a reverse-proxy?

From Gunicorn's documentation:
Deploying Gunicorn
We strongly recommend to use Gunicorn behind a proxy server.
Nginx Configuration
Although there are many HTTP proxies available, we strongly advise that
you use Nginx. If you choose another proxy server you need to make sure
that it buffers slow clients when you use default Gunicorn workers.
Without this buffering Gunicorn will be easily susceptible to
denial-of-service attacks. You can use slowloris to check if your proxy
is behaving properly.
Why is it strongly recommended to use a proxy server, and how would the buffering prevent DOS attacks?
According to the Nginx documentation, a reverse proxy can be used to provide load balancing, provide web acceleration through caching or compressing inbound and outbound data, and provide an extra layer of security by intercepting requests headed for back-end servers.
Gunicorn is designed to be an application server that sits behind a reverse proxy server that handles load balancing, caching, and preventing direct access to internal resources.
If Gunicorn's synchronous workers are exposed directly to the internet, a DoS attack can be performed by creating load that trickles data to the servers, as Slowloris does.
The reason is that there are many slow clients that need time to consume server responses, while Gunicorn is designed to respond fast. There is an explanation of this situation for a similar web server for Ruby called Unicorn.
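A rough sketch of the usual nginx-in-front-of-Gunicorn setup (the socket path and names are illustrative); nginx absorbs and buffers the slow client, so the synchronous worker only ever sees a complete request and can hand back its response immediately:
upstream gunicorn_app {
    server unix:/run/gunicorn.sock fail_timeout=0;   # or 127.0.0.1:8000
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://gunicorn_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering on;   # on by default; this is what shields the sync workers
    }
}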

Running nginx infront of a unicorn or gunicorn under Elastic Load Balancer

I have a very simple question. Nginx does reverse proxy buffering for HTTP servers like Gunicorn and Unicorn. However, if I have an Elastic Load Balancer (offered by Amazon Web Services, also known as ELB), is there any point in running nginx in front of my app server?
Request----> ELB -------> NGINX-------> UNICORN/GUNICORN HTTP SERVER
In a word: yes. Amazon's ELB service is wonderful, but it is solely a load balancer. Running nginx on your own server gives you a locus of control and a place to do rewrites, redirects, compression, header munging, caching, and more. Furthermore, it allows you to serve static files in the fastest possible way, rather than tying up a slot on your more heavyweight app server.
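As a rough sketch of what that nginx layer can do behind the ELB (paths and ports are made up):
server {
    listen 80;

    gzip on;                                  # compression handled here, not in the app

    location /static/ {
        root /var/www/myapp;                  # serve assets directly, skip the app server
        expires 30d;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;     # unicorn/gunicorn listening locally
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}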

What is the benefit of using NginX for Node.js?

From what I understand, Node.js doesn't need NginX to work as an HTTP server (or a WebSocket server, or any server for that matter), but I keep reading about how to use NginX instead of Node.js's internal server and can't find a good reason to go that way.
In this talk (http://developer.yahoo.com/yui/theater/video.php?v=dahl-node) the Node.js author says that Node.js is still in development and so there may be security issues that NginX simply hides.
On the other hand, in case of heavy traffic, NginX will be able to split the job between many running Node.js servers.
In addition to the previous answers, there’s another practical reason to use nginx in front of Node.js, and that’s simply because you might want to run more than one Node app on your server.
If a Node app is listening on port 80, you are limited to that one app. If nginx is listening on port 80 it can proxy the requests to multiple Node apps running on other ports.
It’s also convenient to delegate TLS/SSL/HTTPS to Nginx. Doing TLS directly in Node is possible, but it’s extra work and error-prone. With Nginx (or another proxy) in front of your app, you don’t have to worry about it and there are tools to help you securely configure it.
But be prepared: older nginx versions didn't support HTTP/1.1 when talking to the backend, so features like keep-alive or WebSockets wouldn't work if you put Node behind nginx.
UPD: see nginx 1.2.0 - socket.io - HTTP/1.1 - Proxy websocket connections for more up-to-date info.
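With a modern nginx (1.3.13 or later) the multi-app setup described above can look roughly like this (hostnames and ports are made up); proxy_http_version and the Upgrade headers are what resolve the HTTP/1.1 and WebSocket caveat:
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://127.0.0.1:3000;          # first Node app
        proxy_http_version 1.1;                    # needed for keep-alive and WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://127.0.0.1:3001;          # second Node app on another port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}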

Resources