Nginx ip_hash calculation

We want to use Nginx as a load balancer for our servers, but we are concerned with session stickiness and the ip_hash calculation. If, for example, server backend3 were to die and connections time out, would marking it as down in the config change the ip_hash calculated for sessions?
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
}
As far as I know, ip_hash is calculated from the source IP address and the set of backends it can potentially connect to. I've been unable to find a direct answer to this in the documentation or by scouring the internet. Any answer is appreciated!
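For what it's worth, the ip_hash documentation says that a server which needs to be temporarily removed should be marked down, exactly as in the config above, in order to preserve the current hashing of client IP addresses. If backends are expected to come and go regularly, one alternative to consider (a sketch, not a drop-in replacement) is the generic hash directive with the consistent parameter, which uses ketama consistent hashing so that only a few clients are remapped when a server is added or removed:
upstream backend {
    # ketama consistent hashing: adding or removing a server
    # remaps only the clients that hashed to it
    hash $remote_addr consistent;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}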

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that, from my client's perspective, when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name = "tcp-my-namespace-my-service-7550";
    }
    listen 7550;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However, that relies on nginx having a detailed understanding of HTTP: HTTP is a message-based protocol, i.e. it uses requests and replies. Since nginx knows nothing about the protocol you are using, even if it uses a request/reply mechanism with no implied state, nginx does not know whether it has received a complete request, nor how to replay it elsewhere.
You need to implement a protocol-aware man-in-the-middle.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
I think you need to configure your Nginx Ingress to enable the keepalive options as listed in the documentation here. For instance, in your nginx configuration:
...
keepalive 32;
...
This will activate the keepalive functionality with a cache of up to 32 idle upstream connections per worker process.
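For context, keepalive belongs to nginx's http upstream module and goes inside the upstream block; a minimal sketch (the backend addresses here are made up for illustration):
upstream upstream_balancer {
    server 10.0.0.1:7550;   # hypothetical backend
    server 10.0.0.2:7550;   # hypothetical backend
    keepalive 32;           # cache up to 32 idle upstream connections per worker
}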

Nginx: send all connections to server A before going to server B, then server C, etc.

I am using nginx to balance connections to backend TCP servers using the stream directive. I have two separate questions regarding balancing the connections, as the default algorithms don't seem to be good enough.
Is it possible to load balance in a way that you first max out connections on server A before moving on to server B? Once B is maxed, then move on to server C?
Is it possible to load balance in a way that you send the first 50 connections to server A, then the next 50 to server B? Once both have reached 50, repeat the process again for server A and server B in a cycle until both have reached max load?
upstream tcpServerSocket {
    server 127.0.0.1:9091;
    server 127.0.0.1:9092;
}
server {
    listen 9090;
    proxy_pass tcpServerSocket;
}
Currently I am using round robin, which is not great for my use case. These are WebSockets, if that helps.
You can configure the hash load balancing method as described in the Nginx documentation: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash
To use with Websocket, you can try implementing the example shared in this gist: https://gist.github.com/gihad/25b3c87f35b20b3d3bb6ad589ea42974
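Applied to the config in the question, that suggestion would look roughly like the sketch below (the hash directive also exists in the stream module). Note this gives client-to-server affinity rather than the fill-server-A-first behaviour asked about:
upstream tcpServerSocket {
    # pin each client IP to one backend so its connections stay together
    hash $remote_addr consistent;
    server 127.0.0.1:9091;
    server 127.0.0.1:9092;
}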

Running Minecraft server through NGINX - Only ports 80 and 443 available

I've been looking at possible ways to run a Minecraft server behind a reverse proxy on NGINX with an IP/location/to/server (e.g. 127.0.0.1/minecraft-server) connection, but the only information that I've found is to either use an SRV DNS record, or to use a stream proxy (but no further information is included about this possibility, and it does not provide an NGINX location config).
I need to use an NGINX reverse proxy, as ports 80 and 443 will be the only ports open externally via our provider (HTTP/S servers only; they can't be used for anything else, and the connection will be managed by administrators), and I don't have a domain. I can get one, and an SSL certificate, if that's all that's needed in order to be able to do this.
I know Minecraft runs on a TCP or UDP connection, and that's part of the reason why this is not an easy task, but since this is the only way I can possibly have future external access to my self-hosted Minecraft server, I need a way to run the connection through an HTTP reverse proxy.
Is there any way to do this through NGINX or NGINX+other software?
Thank you in advance.
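For reference, the stream-proxy approach mentioned in the question looks roughly like the sketch below. Caveats: the stream module proxies raw TCP, so it dedicates the whole port to Minecraft (a /minecraft-server location path is an HTTP concept and cannot apply), and it conflicts with any HTTP server already listening on the same port. The backend address and the default Minecraft port 25565 are assumptions:
# top level of nginx.conf, alongside (not inside) the http block
stream {
    server {
        listen 443;                  # the externally open port
        proxy_pass 127.0.0.1:25565;  # assumed local Minecraft server
    }
}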

Is there a way to create a point-to-point (1 server per client) connection using Nginx?

Hi, I am setting up a server with multiple Docker containers that are all running an application (iperf3) that can only host one client at a time for a bandwidth test.
Using Nginx, I would like to provide a dedicated link for a few seconds, until a test is performed, in a point-to-point manner.
Right now my code (as shown below) is very simple: I am listening for TCP and UDP on port 5201 and proxying the connections to 2 servers.
My first approach was to limit the number of connections per server to 1 so that only one client can connect at a time. However, each test generates multiple connections, so limiting the connections per server using the max_conns server parameter did not help me.
Since each test generates multiple connections and they need to be sent to the same server for the test to be successful, I included hash $remote_addr consistent; so that there is client-to-server affinity.
The problem with my setup below is that Nginx will send multiple clients to the same server, and the request will be dropped by the server if it is already performing a test with another client.
stream {
    upstream iperf_backends {
        hash $remote_addr consistent;
        server 127.0.0.1:5202;
        server 127.0.0.1:5203;
    }
    server {
        listen 5201;
        listen 5201 udp;
        proxy_pass iperf_backends;
    }
}
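For reference, the max_conns attempt described above (which did not help, since a single test opens several connections and the extra ones get refused) would have looked something like:
upstream iperf_backends {
    hash $remote_addr consistent;
    # cap each backend at one active connection; too strict here,
    # because one iperf3 test opens several connections
    server 127.0.0.1:5202 max_conns=1;
    server 127.0.0.1:5203 max_conns=1;
}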

about ip_hash in nginx upstream module

I want to replace Pound with nginx as a load balancer, and all tests look fine so far. I will use a typical upstream configuration like this:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
There are now 2 questions left open:
How long does this stickiness last? Is there a TTL to be defined somewhere?
Does the stickyness survive restarts and/or reloads of nginx?
I could not find the answer in the nginx wiki. Links to official docs are welcome.
It is based on a hash of the client's source IP address (per the linked docs, the first three octets of an IPv4 address are used as the hashing key), and as long as you have the same set of backends, stickiness will persist. Since the mapping is a pure function of the address and the server list, not state kept in memory, it also survives restarts and reloads.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash
It comes up when you feel the need for session persistence. The scenario is that users should be directed to the same server, as the application demands, based on their previous connection.
ip_hash = key-value pair hashing [where key = visitor's IP, value = host server]
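Worth noting, since it also answers the first question above: per the linked documentation, if one of the servers needs to be temporarily removed, it should be marked with the down parameter in order to preserve the current hashing of client IP addresses:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    # mark it down rather than deleting the line,
    # so the hashing for other clients is preserved
    server backend3.example.com down;
}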
