Nginx load balancer method ip_hash user distribution problem

I am trying to configure an nginx load balancer, but with the ip_hash method nginx sends users to only one server. Is there any additional configuration to apply alongside ip_hash to distribute users to the other, unused servers?
We have 3 backend servers behind an nginx load balancer that uses ip_hash as the balancing method:
upstream backend {
    ip_hash;
    server IP:PORT;
    server IP:PORT;
    server IP:PORT;
}
We tried the least_conn method to distribute users more evenly, but then our application kicks users out shortly after they log in, for no apparent reason. Adding keepalive 10; did not help either.
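Two things are likely going on here. First, ip_hash hashes only the first three octets of an IPv4 address, so every client behind the same /24 network (a corporate NAT, a campus proxy) maps to the same server; if most users share a subnet, one server gets nearly all the traffic. Second, the logouts under least_conn suggest the application keeps sessions in server-local memory, so users break whenever their requests move between servers; the proper fix for that is a shared session store or sticky sessions. As an nginx-side sketch (keeping the placeholder addresses from above), open-source nginx can hash the full client address instead:

```nginx
upstream backend {
    # hash the full client IP, not just the /24 that ip_hash uses;
    # "consistent" (ketama) limits remapping when a server is added or removed
    hash $remote_addr consistent;
    server IP:PORT;
    server IP:PORT;
    server IP:PORT;
}
```

This still pins each client to one server (so server-local sessions keep working), but spreads clients from the same subnet across all three servers.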

Related

NGINX - Return backend server name in the response header

I am running an nginx server and using it as a reverse proxy (via proxy_pass).
In this setup, the user hits nginx, and nginx forwards the request, based on the proxy_pass target, to a backend server to get the response. In some cases that backend is itself load balanced, in which case the system hits one of the backend servers for the response.
What I want: when the response is returned from a backend server, I want to include that backend server's name in the response header. How can I achieve this? Do I also need to make changes on the backend servers/applications? The applications on the backend servers run on different web servers (IIS, Tomcat, nginx, etc.).
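This can be done in nginx alone, without touching the backends: nginx exposes the address of the upstream that served a given request in the $upstream_addr variable. A minimal sketch, assuming an upstream block named backend already exists; the header name X-Upstream-Addr is just an illustrative choice:

```nginx
server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # attach the upstream's IP:port to every response,
        # including error responses ("always" requires nginx >= 1.7.5)
        add_header X-Upstream-Addr $upstream_addr always;
    }
}
```

If you want a human-readable server name rather than IP:port, the backends would need to set such a header themselves, since nginx only knows the address it proxied to.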

what happens when NginX server is shut down?

Imagine that we have a web application running on three different servers, and an nginx server acting as a load balancer that distributes requests across those three servers.
So what happens to requests when the nginx server itself is no longer running? Are they redirected to one of the servers? If not, how could we redirect them?
If the nginx load balancer itself is down, requests never reach any backend at all; making the balancer highly available requires a second balancer (for example with keepalived and a floating IP) or DNS failover. The related failure mode is one of the upstream servers going down: nginx may still route requests to it until its failure detection kicks in, so some requests can get a 502 Bad Gateway.
To stop down servers from receiving requests, you can use nginx's health checks:
NGINX and NGINX Plus can continually test your upstream servers, avoid the servers that have failed, and gracefully add the recovered servers into the load-balanced group.
In your app, expose a path such as /health_check that responds with a 200 status code when the instance is healthy, and use this configuration:
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
    }
    server {
        location / {
            proxy_pass http://backend;
            health_check uri=/health_check;
        }
    }
}
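Note that the active health_check directive above is an NGINX Plus feature. Open-source nginx only supports passive checks, where a server is marked unavailable after a number of failed proxied requests. A rough open-source equivalent, with the defaults tuned:

```nginx
upstream backend {
    # take a server out of rotation for 30s after 3 failed attempts
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
}
```

Passive checks only detect failures from real client traffic, so a few requests may still hit a dead server before it is marked down.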

Mixed content issue in using Application Load Balancer (ALB) in AWS

I have an ASP.NET web application hosted on IIS. The web application (an Umbraco site) is configured with an HTTP binding in IIS, and an SSL certificate is bound to an Application Load Balancer (ALB) in AWS, which handles user requests over HTTPS. This means that when a user requests a resource, the ALB redirects any HTTP traffic to HTTPS and then forwards the request to IIS over port 80 (internal traffic within the VPC).
For most resources this is absolutely fine, but there are a handful of resources (fonts and images) which seem to be requested over HTTP, which causes a mixed-content warning in the browser. I have tried HTTP-to-HTTPS rewrite rules in IIS, and outbound rules to rewrite the response, but neither resolves the issue.
Can anyone help?
The solution was to run the web app locally over HTTPS rather than HTTP, and to update the load balancer to forward requests to the web server on port 443 rather than port 80.
To do so:
Create a development SSL certificate on IIS. Rather than creating a self-signed certificate, I used mkcert (https://github.com/FiloSottile/mkcert) so that the certificate was trusted.
In AWS, update the target group used by the ALB listener to forward requests to the IIS server on port 443 rather than port 80.

How to load balance containers?

How to load balance docker containers running a simple web application?
I have 3 web containers running in a single host. How do I load balance my web containers?
Put a load balancer in front of them; haproxy or nginx can do the job.
Decent Haproxy Documentation
Nginx Howto
Either way, run the load balancer on the host itself or on a separate server that can reach the containers' exposed ports. Nginx will probably be simpler for your needs.
To set up basic nginx load balancing:
http {
    upstream myapp1 {
        server CONTAINER_APP0_IP:PORT;
        server CONTAINER_APP1_IP:PORT;
        server CONTAINER_APP2_IP:PORT;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}

Running nginx in front of a unicorn or gunicorn under Elastic Load Balancer

I have a very simple question. Nginx does reverse-proxy buffering for HTTP servers like Gunicorn and Unicorn. However, if I have an Elastic Load Balancer (offered by Amazon Web Services, also known as ELB), is there any point in running nginx in front of my app server?
Request----> ELB -------> NGINX-------> UNICORN/GUNICORN HTTP SERVER
In a word: yes. Amazon's ELB service is wonderful, but it is solely a load balancer. Running nginx on your own server gives you a locus of control and a place to do rewrites, redirects, compression, header munging, caching, and more. Furthermore, it allows you to serve static files in the fastest possible way, rather than using a slot on your more heavyweight app server.
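As a sketch of what that buys you, here is roughly the kind of server block people run between an ELB and gunicorn (the port, paths, and asset location are illustrative assumptions):

```nginx
server {
    listen 80;

    # compress text responses before they go back through the ELB
    gzip on;
    gzip_types text/css application/javascript application/json;

    # serve static assets directly, bypassing the app server
    location /static/ {
        root /var/www/myapp;
        expires 7d;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;   # gunicorn/unicorn bound locally
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

None of this is possible with the ELB alone, which is why the nginx layer still earns its place in the chain.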
