What happens when the NGINX server is shut down?

Imagine that we have a web application running on three different servers, and an NGINX server that acts as a load balancer, distributing requests across those three servers.
So what happens to requests when the NGINX server is no longer running? Are they redirected to one of the servers? If not, how could we redirect them to one of the servers?

If one of the load-balanced instances is down, requests will still get routed to that server, because nginx has no immediate way of knowing that the upstream instance is failing. You'll get a 502 Bad Gateway for roughly one out of three requests.
To stop down servers from receiving requests, you can use nginx's health checks.
NGINX and NGINX Plus can continually test your upstream servers, avoid the servers that have failed, and gracefully add recovered servers back into the load-balanced group.
In your app, you can expose a path /health_check that responds with a 200 status code when the instance is OK, and use this configuration (note that the active health_check directive requires NGINX Plus):
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
    }
    server {
        location / {
            proxy_pass http://backend;
            health_check uri=/health_check;
        }
    }
}
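If you are on open-source nginx, which lacks the active health_check directive, a rough equivalent is passive health checking via the max_fails and fail_timeout parameters on each upstream server. A minimal sketch (hostnames carried over from the example above):

```nginx
http {
    upstream backend {
        # after 3 failed attempts, take the server out of rotation for 30s
        server backend1.example.com max_fails=3 fail_timeout=30s;
        server backend2.example.com max_fails=3 fail_timeout=30s;
    }
    server {
        location / {
            proxy_pass http://backend;
            # retry the next upstream on connection errors and 502/503 responses
            proxy_next_upstream error timeout http_502 http_503;
        }
    }
}
```

Unlike active checks, this only marks a server as failed after real client requests have hit it, so some requests will still see errors before the server is excluded.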

Related

Nginx load balancer method ip_hash user distribution problem

I am trying to configure an nginx load balancer, but with the ip_hash method nginx redirects users to only one server. Are there any other configurations to apply alongside the ip_hash method to distribute users across the other, unused servers?
We have 3 servers and a backend with an nginx load balancer that uses ip_hash as its method:
upstream backend {
    ip_hash;
    server IP:PORT;
    server IP:PORT;
    server IP:PORT;
}
We tried the least_conn method to distribute users more evenly, but then our application kicks users out after they log in, for no apparent reason. We also tried adding keepalive 10; but that did not work either.
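One likely cause worth noting: for IPv4 clients, ip_hash hashes only the first three octets of the address, so clients coming from the same /24 network (e.g. behind one NAT or office gateway) all map to the same server. A possible alternative, sketched here with placeholder addresses, is the generic hash directive on the full client address:

```nginx
upstream backend {
    # hash the full client address instead of ip_hash's first three octets;
    # "consistent" (ketama) limits remapping when servers are added or removed
    hash $remote_addr consistent;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}
```

This still keeps each client pinned to one server (which your application apparently requires for its sessions, given that least_conn logs users out), while spreading distinct client IPs across all three servers.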

How to implement failover of UDP traffic based health checks with HTTP response codes?

I have added the following to nginx.conf.
stream {
    upstream wg_servers {
        server wg_ip_#1:51820;
        server wg_ip_#2:51820 backup;
    }
    server {
        listen failover_ip:51820 udp;
        proxy_pass wg_servers;
        proxy_bind failover_ip:51820;
        #location / {
        #    proxy_pass http://wg_servers;
        #    health_check port=8080;
        #}
    }
}
Basically, what I am trying to achieve is as follows:
- I have two nginx VPS servers: one primary and one backup.
- I have two WireGuard VPS servers: one primary and one backup.
- A failover IP will be moved by keepalived to the backup nginx load-balancing server if the primary nginx server fails.
- keepalived will monitor the primary and backup nginx servers.
- I have a script on each WireGuard server that responds at / on port 8080 with an HTTP code (such as 200) reflecting the status of WireGuard.
- Based on the HTTP health checks, I want nginx to pass WireGuard's UDP packets to the primary WireGuard server, or, if the primary server is problematic (based on HTTP codes), to the backup WireGuard server.
- I also need the UDP traffic to be returned from the failover_ip of the nginx server, not from the IP of the WireGuard VPS server.
I went through this article, and what I'm not sure of is:
How do I set up the HTTP checks? I added the backup flag to the secondary server, but how do I do the checking?
Is there anything else I should add/remove to make it more effective for my goals?
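For reference, active health checks in the stream module are an NGINX Plus feature; there, a match block with send/expect can probe the HTTP status script on port 8080. A sketch under that assumption, with placeholder addresses standing in for the real IPs:

```nginx
stream {
    upstream wg_servers {
        server 203.0.113.10:51820;          # primary WireGuard server
        server 203.0.113.20:51820 backup;   # backup WireGuard server
    }

    # NGINX Plus only: send a raw HTTP request and expect a 200 status line
    match wg_http_ok {
        send "GET / HTTP/1.0\r\n\r\n";
        expect ~ "HTTP/1\.[01] 200";
    }

    server {
        listen 203.0.113.5:51820 udp;       # the failover_ip
        proxy_pass wg_servers;
        proxy_bind 203.0.113.5:51820;
        # probe TCP port 8080 (the status script) rather than the proxied UDP port
        health_check port=8080 match=wg_http_ok;
    }
}
```

On open-source nginx this directive is unavailable; an external script that rewrites the upstream block (or drives keepalived) and reloads nginx would have to stand in for it.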

How to reroute SFTP traffic via NGINX

I'm trying to set up an FTP subdomain, such that all incoming SFTP requests to (say) ftp.myname.com get routed to a particular internal server (say, 10.123.456) via port 22.
How do I use nginx to route this traffic?
I've already set up the SFTP server, and can SFTP directly to it, say:
sftp username@123.456.7890, which works fine.
The problem is that when I set up nginx to route all traffic to ftp.myname.com, it connects, but the passwords get rejected. I have no problem routing web traffic to my other subdomains, say dev.myname.com (with passwords), but it doesn't work for the SFTP traffic:
server {
    listen 22;
    server_name ftp.myname.com;
    return .............
}
How do I define the return string to route the traffic with the passwords?
The connection is SFTP (via port 22).
Thanks
Answering @peixotorms: yes, you can. nginx can proxy/load balance HTTP as well as TCP and UDP traffic; see the nginx stream modules documentation (at the nginx main documentation page), and specifically the stream core module's documentation.
You cannot do this on nginx (it speaks HTTP only); you must use something like HAProxy and a simple DNS record for your subdomain pointing to the server IP.
Some info: http://jpmorris-iso.blogspot.pt/2013/01/load-balancing-openssh-sftp-with-haproxy.html
Edit:
Since nginx version 1.15.2 it is possible to do this using the variable $ssl_preread_protocol. The official blog added a post about how to use this variable for multiplexing HTTPS and SSH on the same port:
https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
Example of configuring SSH on an upstream block:
stream {
    upstream ssh {
        server 192.0.2.1:22;
    }
    upstream sslweb {
        server 192.0.2.2:443;
    }
    map $ssl_preread_protocol $upstream {
        default ssh;
        "TLSv1.2" sslweb;
    }
    # SSH and SSL on the same port
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

How to load balance containers?

How to load balance docker containers running a simple web application?
I have 3 web containers running in a single host. How do I load balance my web containers?
Put a load balancer in front of them; HAProxy or nginx can do the job.
Decent Haproxy Documentation
Nginx Howto
Either way, put the load balancer on the host or on a different server that can access the exposed ports on the containers. Nginx will probably be simpler for your needs.
To set up basic nginx load balancing:
http {
    upstream myapp1 {
        server CONTAINER_APP0_IP:PORT;
        server CONTAINER_APP1_IP:PORT;
        server CONTAINER_APP2_IP:PORT;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
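Container IPs change across restarts, so rather than hard-coding CONTAINER_APP*_IP, you can put all the containers on a user-defined Docker network, where container names resolve through Docker's embedded DNS. A sketch assuming containers named app0, app1, and app2, each listening on port 8080 (names and port are illustrative):

```nginx
http {
    upstream myapp1 {
        # on a user-defined Docker network, these names resolve via Docker's DNS
        server app0:8080;
        server app1:8080;
        server app2:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
```

For this to work, the nginx container must join the same network as the app containers (e.g. created with docker network create), since the default bridge network does not provide name resolution.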

How to use nginx or apache to process tcp inbound traffic and redirect to specific php processor?

This is the main idea: I want to use the NGINX or Apache web server as a TCP processor, so that it manages all the threads, connections, and client sockets. All packets received on a port, let's say port 9000, would be redirected to a program written in PHP or Python, and that program would process each request, storing the data in a database. The big problem is that this program also needs to send data back to the client socket currently connected to the NGINX or Apache server. I've been told I should do something like this instead of creating my own TCP server, which is difficult and very hard to maintain, since socket handling under heavy load can lead to memory faults or even crash the server. I have tried it before, and in fact the server crashed.
Any ideas on how to achieve this?
Thanks.
Apache/nginx is a web server: it can serve static content to your customers and forward application service requests to other application servers.
I only know about Django; here is a sample nginx configuration from Configuration for Django, Apache and Nginx:
location / {
    # proxy / requests to apache running django on port 8081
    proxy_pass http://127.0.0.1:8081/;
    proxy_redirect off;
}
location /media/ {
    # serve static media directly from nginx
    root /srv/anuva_project/www/;
    expires 30d;
    break;
}
With this configuration, nginx serves local static files for URLs under /media/* and forwards all other requests to the Django server at localhost port 8081.
I have the feeling HAProxy is a tool better suited to your needs, which apparently involve raw TCP rather than HTTP. You should at least give it a try.
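That said, modern nginx can also proxy raw TCP through its stream module. A minimal sketch, assuming your PHP or Python program runs as a long-lived daemon listening on a port of its own (9001 here is an assumption, not something from the question):

```nginx
stream {
    server {
        # accept raw TCP connections on port 9000 and hand each one to the app
        listen 9000;
        proxy_pass 127.0.0.1:9001;
    }
}
```

nginx then handles the client sockets and connection management, while the backend program only has to accept connections from nginx, process each request, and write its reply back on the same connection.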
