about ip_hash in nginx upstream module

I want to replace Pound with nginx as a load balancer, and all tests look fine so far. I will use a typical upstream configuration like this:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
Two questions remain open:
How long does this stickiness last? Is there a TTL to be defined somewhere?
Does the stickiness survive restarts and/or reloads of nginx?
I could not find the answer in the nginx wiki. Links to official docs are welcome.

It is based on a hash of the client's source IP address, and as long as you have the same set of backends, the stickiness will persist. There is no TTL to configure: the target server is recomputed from the hash on every request rather than stored anywhere, so the mapping also survives restarts and reloads of nginx.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash
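One practical consequence: if a backend has to be taken out temporarily, the linked docs recommend marking it down instead of deleting its server line, so that the hash mapping of clients on the remaining backends is preserved:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    # marked down instead of removed, so clients hashed to the
    # other two servers keep their existing mapping
    server backend3.example.com down;
}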

It comes into play when you feel the need for session persistence: the scenario where users should be directed to the same server as before, because the application depends on state from a previous connection.
ip_hash = key-value pair hashing [where key = visitor's IP, value = host server]
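In config terms, the analogy maps directly onto the upstream block from the question; a sketch with the roles annotated:

upstream backend {
    ip_hash;                       # key: hash of the visitor's IP address
    server backend1.example.com;   # value: one of these host servers,
    server backend2.example.com;   # chosen once per key and reused on
    server backend3.example.com;   # subsequent requests
}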

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that, from my client's perspective, when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-my-namespace-my-service-7550";
    }
    listen 7550;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, even if it uses a request/reply mechanism with no implied state, nginx cannot know whether it has received a complete request, nor whether it is safe to replay it elsewhere.
You need to implement a protocol-aware MITM.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
I think you need to configure your Nginx Ingress to enable the keepalive options as listed in the documentation. For instance, in your nginx configuration:
...
keepalive 32;
...
This will activate the keepalive functionality, caching up to 32 idle connections to the upstream per worker process.
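Note that keepalive is only valid inside an upstream block; a minimal sketch of the placement, with hypothetical backend names:

upstream backend {
    server backend1.example.com:7550;
    server backend2.example.com:7550;
    # cache up to 32 idle connections to these backends
    # in each worker process
    keepalive 32;
}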

facing an issue with haproxy / nginx

I need to set up a reverse proxy server which would distribute traffic to the backend servers based on the incoming Host header.
I opted for HAProxy for this, but after setting everything up I realized that HAProxy resolves the backend hostnames just once, when the service starts, and continues to use those IP addresses until it is reloaded/restarted.
This is an issue for me since, in my case, if the backend server reboots it will have a different IP address, and I don't have control over which IP address it gets.
I am thinking of moving to nginx, but before I go through all the setup I would like to know if we have the same issue with nginx or not.
Meaning: if in the configuration file I have specified the name of a backend server and the related IP address changes, will nginx refresh its DNS cache to pick up the new IP address?
(When the backend server changes IP, the hosts file of the proxy server is automatically updated.)
Yes, nginx will do the job. See the 'resolve' option here:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
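A minimal sketch of that option, with hypothetical names. Two caveats to check against your nginx version: the resolve parameter requires a resolver directive (and, for http upstreams, a shared memory zone), and nginx's resolver queries the DNS server directly rather than reading the hosts file:

resolver 10.0.0.2 valid=10s;             # hypothetical DNS server; re-check names every 10s

upstream backend {
    zone backend 64k;                    # shared memory zone for runtime updates
    server backend.example.com resolve;  # re-resolve this hostname at run time
}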

Patching Nginx to ip_hash 4 octets instead of 3

I'm currently running two backend servers on my network and load balancing with nginx on Windows.
I am load testing the system at the moment; however, all of my traffic is directed at one server. This is because the ip_hash algorithm sorts traffic by the first 3 octets, i.e. 111.222.333.XXX.
This is a problem because all of the traffic I am aiming at the server has the same base address (the same first 3 octets), therefore none of my traffic is going to the other server. Does anyone know a way to patch or change the ip_hash algorithm so it hashes on all 4 octets?
Thanks
The nginx open source version supports the hash directive, which may work similarly (though not exactly the same) to the sticky session mechanism provided by the commercial version:
The generic hash method: the server to which a request is sent is determined from a user-defined key which may be a text, variable, or their combination. For example, the key may be a source IP and port, or URI:
upstream backend {
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}
https://www.nginx.com/resources/admin-guide/load-balancer/
So how do you use all 4 octets of an IPv4 address with the hash method? Let's find out how to get the client IP from the Embedded Variables section: http://nginx.org/en/docs/http/ngx_http_core_module.html#variables
$remote_addr client address
So the code looks like:
upstream backend {
    hash $remote_addr consistent;
    server backend1.example.com;
    server backend2.example.com;
}
UPDATE:
If you take a look at the Stream module (TCP proxy), the very first example shows exactly the same approach:
upstream backend {
    hash $remote_addr consistent;
    server backend1.example.com:12345 weight=5;
    server backend2.example.com:12345;
    server unix:/tmp/backend3;
}

server {
    listen 12346;
    proxy_pass backend;
}

nginx non http port redirection

There's a server at a customer site that runs nginx, a Salt master daemon from SaltStack, and a secret web app that does secret things.
Considerations:
In this scenario, there's only one IP, only one server, and multiple DNS records available;
I have nginx running on port 80;
And the Salt master running on port 6453;
A domain.example.com binding to that IP, exposing my nginx port 80, that points to the secret web app;
otherdomain.example.com binding to the same IP, exposing my nginx port 80, that I want to use to proxy the Salt port.
That customer has a machine in another place that needs to connect to the Salt master, and its internet connection is provided by a secret organization that only allows connections to port 80; no negotiation possible.
My question:
Is it possible to use nginx to redirect otherdomain.example.com port 80 to port 6453? I tried the following:
server {
    listen 80;
    server_name otherdomain.example.com;
    proxy_pass 127.0.0.1:6453;
}
But that doesn't work as expected. Is it possible? Is there some way to do this using nginx?
The error I got from log was:
"proxy_pass" directive is not allowed here
proxy_pass needs to be specified within a location context, and is fundamentally a Web Thing. It only comes into play after the web headers are sent and interpreted.
Things like what you're trying to accomplish are commonly done using HAProxy in TCP mode, although nginx also has a TCP proxy (stream) module that does similar things.
However, I don't think you're going to be very successful, as ZMQ does not participate in the protocol (HTTP Host: headers) that easily allows you to tell the web requests apart from the non-web requests (that come in on the same port).
My recommendation is to either find some way to use another port for this, get a second IP address, or write a tricky TCP proxy that identifies incoming HTTP and/or ZMQ connections and transparently forwards them to the correct local port.
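For completeness, the stream (TCP proxy) module form would look roughly like the sketch below, at the top level of nginx.conf; it still needs its own listener, since it cannot share port 80 with the existing http server on the same address:

stream {
    server {
        # a dedicated port (or IP) is needed; port 80 is already
        # taken by the http server in this scenario
        listen 8080;
        proxy_pass 127.0.0.1:6453;
    }
}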

Nginx Ip_Hash calculation

We want to use nginx as a load balancer for our servers, but we are concerned about session stickiness and the ip_hash calculation. If, for example, server backend3 were to die and connections time out, would marking it as down in the config change the ip_hash calculated for sessions?
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
}
As far as I know, ip_hash is loosely calculated from the source IP and the set of backends it can potentially connect to. I've been unable to find a direct answer to this in the documentation or by scouring the internet. Any answer is appreciated!
