Patching Nginx to ip_hash 4 octets instead of 3

I'm currently running two backend servers on my network and load balancing them with Nginx on Windows.
I am load testing the system at the moment; however, all of my traffic is directed at one server. This is because the ip_hash algorithm sorts traffic by the first 3 octets, i.e. 111.222.333.XXX.
This is a problem because all of the traffic I am aiming at the server has the same base address (the same first 3 octets), therefore none of my traffic is going to the other server. Does anyone know a way to patch or change the ip_hash algorithm so that it hashes on all 4 octets?
Thanks

The open source version of Nginx supports the hash directive, which may work similarly (though not exactly the same) to the sticky session mechanism provided by the commercial version:
The generic hash method: the server to which a request is sent is
determined from a user-defined key which may be a text, variable, or
their combination. For example, the key may be a source IP and port,
or URI:
upstream backend {
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}
https://www.nginx.com/resources/admin-guide/load-balancer/
So how do you use all 4 octets of an IPv4 address with the hash method? Let's find out how to get the client IP from the Embedded Variables section: http://nginx.org/en/docs/http/ngx_http_core_module.html#variables
$remote_addr client address
So the code looks like:
upstream backend {
    hash $remote_addr consistent;
    server backend1.example.com;
    server backend2.example.com;
}
UPDATE:
If you take a look at the Stream module (TCP proxy), the very first example shows exactly the same approach:
upstream backend {
    hash $remote_addr consistent;

    server backend1.example.com:12345 weight=5;
    server backend2.example.com:12345;
    server unix:/tmp/backend3;
}

server {
    listen 12346;
    proxy_pass backend;
}

Related

Is there a way to create a point-to-point (1 server per client) connection using Nginx?

Hi, I am setting up a server with multiple Docker containers that all run an application (iperf3) that can only host one client at a time for a bandwidth test.
Using Nginx, I would like to provide a dedicated link for a few seconds, until a test is performed, in a point-to-point manner.
Right now my config (shown below) is very simple: I am listening for TCP and UDP on port 5201 and proxying the connections to 2 servers.
My first approach was to limit the number of connections per server to 1 so that only one client can connect at a time. However, each test generates multiple connections, so limiting the connections per server using the max_conns server parameter did not help me (a sketch of that attempt follows the config below).
Since each test generates multiple connections and they all need to be sent to the same server for the test to succeed, I included hash $remote_addr consistent; so that there is client-to-server affinity.
The problem with my setup below is that Nginx will send multiple clients to the same server, and the request will be dropped by the server if it is already performing a test with another client.
stream {
    upstream iperf_backends {
        hash $remote_addr consistent;
        server 127.0.0.1:5202;
        server 127.0.0.1:5203;
    }

    server {
        listen 5201;
        listen 5201 udp;
        proxy_pass iperf_backends;
    }
}
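For reference, the abandoned max_conns attempt described above would have looked roughly like the sketch below (reconstructed here as an assumption; max_conns requires nginx 1.11.5 or newer in the open source stream module). It fails for exactly the reason stated: each iperf3 test opens several connections, so a cap of one rejects the test's own extra streams.
stream {
    upstream iperf_backends {
        # cap each backend at a single connection; too strict, because one
        # iperf3 test opens more than one connection
        server 127.0.0.1:5202 max_conns=1;
        server 127.0.0.1:5203 max_conns=1;
    }

    server {
        listen 5201;
        listen 5201 udp;
        proxy_pass iperf_backends;
    }
}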

NGINX Forwarding a request

I have an NGINX server set up, and I'd like to take a request and forward it to another application on a TCP port.
Let's say I have the following JSON payload
{
    "someKey1": 1234,
    "someKey2": "a string"
}
This is sent inside query parameters like the following
https://mywebsite.com?payload=%7B%0A%20%22someKey1%22%3A%201234%2C%0A%20%22someKey2%22%3A%20%22a%20string%22%0A%7D
Is there a way to forward that JSON payload to TCP port 1234 natively with NGINX?
Additionally, can I do any pre-processing of the above payload prior to it being forwarded to TCP port 1234? For example, I'd like to convert the above JSON to
someKey1=1234,someKey2="a string"
And then forward this data to TCP port 1234
I understand I'd have to create some sort of REST endpoint using something like Spring Boot to do this, but I'd really like to try to accomplish the above natively with NGINX if possible.
Nginx's primary purpose is to be an HTTP server/proxy.
It can be scripted via ngx_http_lua_module, but for your task it is much simpler to build an app/microservice that listens for HTTP and forwards to your custom protocol, or to modify the app listening on the mentioned port so that it understands HTTP.
Once your endpoint talks HTTP, nginx can be used for routing:
location /some_path/ {
    proxy_pass http://localhost:1234/;
}

location /some_other_path/ {
    proxy_pass http://localhost:1235/;
}
NGINX is a simple web server which accepts HTTP requests, forwards them to a configured location (which may be an application server or any other web server), and responds back over HTTP to the requester. Data can't be processed inside NGINX.
You can configure forwarding rules in the default file under the sites-available directory in the NGINX installation directory.
Here is a nice tutorial on NGINX configuration which might help you.
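A hedged sketch of such a forwarding rule (assuming a Debian-style layout where the file is /etc/nginx/sites-available/default, and assuming the application behind port 1234 has been taught to speak HTTP, as the first answer suggests):
# /etc/nginx/sites-available/default  (path assumed; adjust to your installation)
server {
    listen 80;
    server_name mywebsite.com;

    location / {
        # forwards the request, including the ?payload=... query string, unchanged
        proxy_pass http://127.0.0.1:1234;
    }
}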

nginx non http port redirection

There's a server at a customer site that runs nginx, a salt master daemon from SaltStack, and a secret web app that does secret things.
Considerations:
In this scenario there's only one IP, only one server, and multiple DNS records available;
I have nginx running on port 80;
And the salt master running on port 6453;
domain.example.com binds to that IP, exposing my nginx port 80, and points to the secret webapp;
otherdomain.example.com binds to the same IP, exposing my nginx port 80, and I want to use it to proxy the salt port.
That customer has a machine in another location that needs to connect to the salt master, and its internet connection is provided by a secret organization that only allows connections to port 80; no negotiation possible.
My question:
Is it possible to use nginx to redirect otherdomain.example.com port 80 to port 6453? I tried the following:
server {
    listen 80;
    server_name otherdomain.example.com;
    proxy_pass 127.0.0.1:6453;
}
But that doesn't work as expected. Is it possible? Is there some way to do this using nginx?
The error I got from the log was:
"proxy_pass" directive is not allowed here
proxy_pass needs to be specified within a location context, and is fundamentally a Web Thing. It only comes into play after the web headers are sent and interpreted.
Things like what you're trying to accomplish are commonly done using HAProxy in TCP mode, although there is a TCP proxy module for nginx that does similar things.
However, I don't think you're going to be very successful, as ZMQ does not participate in the protocol (HTTP Host: headers) that would easily allow you to tell the web requests apart from the non-web requests coming in on the same port.
My recommendation is to either find some way to use another port for this, use a second IP address, or write a tricky TCP proxy that will identify incoming HTTP and/or ZMQ connections and transparently forward them to the correct local port.
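If another port or a second IP address can be arranged, as suggested above, the stream module shown earlier on this page can forward the raw TCP connection with no HTTP involved. A minimal sketch, assuming a hypothetical second address 192.0.2.10 dedicated to salt traffic and an HTTP server block bound only to the first address:
stream {
    server {
        # 192.0.2.10 is a placeholder for the hypothetical second IP
        listen 192.0.2.10:80;
        # raw TCP pass-through to the salt master port
        proxy_pass 127.0.0.1:6453;
    }
}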

Nginx proxy_pass to Minecraft server

I'm trying to run two Minecraft servers on the same machine on two different ports. I want to reference them based on subdomains:
one.example.com -> <minecraft>:25500
two.example.com -> <minecraft>:25501
I have used nginx for things like this before, but it's not working with Minecraft. It's responding with http status 400. Here is a sample from my log:
192.168.0.1 - - [21/Apr/2013:17:25:40 -0700] "\x02<\x00\x0E\x00t\x00h\x00e\x00s\x00a\x00n\x00d\x00y\x00m\x00a\x00n\x001\x002\x003\x00\x1C\x00t\x00e\x00s\x00t\x00.\x00r\x00y\x00a\x00n\x00s\x00a\x00n\x00d\x00y\x00.\x00i\x00s\x00-\x00a\x00-\x00g\x00e\x00e\x00k\x00.\x00c\x00o\x00m\x00\x00c\xDD" 400 173 "-" "-"
Here is my nginx config:
upstream mine1 {
    server 127.0.0.1:25500;
}

upstream mine2 {
    server 127.0.0.1:25501;
}

server {
    listen 25565;
    server_name one.example.com;

    access_log /var/log/nginx/one.access;
    error_log /var/log/nginx/one.error;

    location / {
        proxy_pass http://mine1;
    }
}

server {
    listen 25565;
    server_name two.example.com;

    access_log /var/log/nginx/two.access;
    error_log /var/log/nginx/two.error;

    location / {
        proxy_pass http://mine2;
    }
}
If I'm reading this correctly, nginx is responding with 400. My guess is the Minecraft client is not sending valid HTTP headers and Nginx is tossing out the request. But I'm totally at a loss. Any help would be appreciated.
Try this in your DNS records:

A RECORD
    Name:  one.example.com
    Value: <server_ip>
    TTL:   86400

    Name:  two.example.com
    Value: <server_ip>
    TTL:   86400

SRV RECORD
    Name:  _minecraft._tcp.one.example.com
    Port:  25500
    Value: one.example.com

    Name:  _minecraft._tcp.two.example.com
    Port:  25501
    Value: two.example.com
As Dag Nabbit stated, a Minecraft server does not talk HTTP. You would typically do this via NAT. A proxy server needs to know the protocol, because, as the name suggests, it acts on behalf of the client. Nginx knows various protocols, not just HTTP, but Minecraft is not one of them. You could, however, write a proxy module for this protocol and use the existing nginx infrastructure. Since I'm not familiar with the protocol, I can't say whether that would have any advantages over NAT.
One thing to note for future readers: while nginx can pass connections off as a "proxy" to any server:port listed in an upstream definition, in a SOCKS-proxy style of connection, this does not work when nginx itself is listening for HTTP communications. This is simply because nginx is designed, by default, as a dead simple static HTTP server.
Any sort of reverse proxying of TCP/UDP connections is more scalable at a lower OSI layer (i.e. layer 3 or layer 2, instead of layer 6/7 where nginx operates). This is where source and destination NAT come into play, which is better handled by a firewall or a routing policy on your edge device.
DNS round-robin is not the best solution either: while it does sit lower in the OSI layering, it is only viable if the end applications (OSI layer 7) understand the method, and Minecraft (or just about any game server), at last check, did not have this built into its networking code.
Now, I did look into this, and there are a few Minecraft-specific solutions that are worth looking into further:
Transporter plugin
BungeeCord
Be sure to read all the documentation, as these are quite complex to configure and install. Hence the recommendation to just use a NAT-ed network topology instead.
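As an aside, the stream module shown earlier on this page can at least forward the raw Minecraft TCP port to a single backend; what it cannot do, without extra scripting, is pick a backend per subdomain, because nginx does not parse the Minecraft handshake. A minimal sketch under that limitation:
stream {
    server {
        listen 25565;
        # raw TCP pass-through to one Minecraft instance; no hostname-based routing
        proxy_pass 127.0.0.1:25500;
    }
}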
I tried to set up my multiple Minecraft instances with SRV records, but that also doesn't work.
An nslookup of my SRV records shows:
C:\Users\Administrator>nslookup -type=SRV _minecraft._tcp.xxx.net
Server: mijnmodem.kpn
Address: 192.168.1.1
Non-authoritative answer:
_minecraft._tcp.xxx.net SRV service location:
priority = 5
weight = 5
port = 25565
svr hostname = camelot.xxx.net
_minecraft._tcp.xxx.net SRV service location:
priority = 5
weight = 5
port = 25566
svr hostname = cityworld.xxx.net
On my router (ZTE H369), ports 25565 and 25566 are forwarded straight through (TCP and UDP) to the IP where the instances run. Accessing the URLs (in Minecraft) gives io.netty.channel.AbstractChannel$AnnotatedConnectException.
Any suggestions on how to investigate further?

about ip_hash in nginx upstream module

I want to replace pound with nginx as a load balancer, and all tests look fine so far. I will use a typical upstream configuration like this:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
There are now 2 questions left open:
How long does this stickiness last? Is there a TTL to be defined somewhere?
Does the stickiness survive restarts and/or reloads of nginx?
I could not find the answer in the nginx wiki. Links to official docs are welcome.
It is based on a hash of the client's source IP address, and as long as you have the same set of backends, stickiness will persist.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash
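One related detail from that documentation page: if one of the servers needs to be temporarily removed, marking it with the down parameter (rather than deleting its line) preserves the current hashing of client IP addresses. For example:
upstream backend {
    ip_hash;
    server backend1.example.com;
    # temporarily removed from rotation; the mapping of other clients is preserved
    server backend2.example.com down;
    server backend3.example.com;
}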
It comes up when you feel the need for session persistence; the scenario is that users should be directed to the same server as the application demands, based on a previous connection.
ip_hash = key-value pair hashing [where key = visitor's IP, value = host server]
