Is there a way to specify port range in nginx config upstream block?

I'm looking for a way to specify a port range in the nginx upstream block.
Is there a way to turn this:
upstream backend {
least_conn;
server 127.0.0.1:3000;
server 127.0.0.1:3001;
server 127.0.0.1:3002;
server 127.0.0.1:3003;
server 127.0.0.1:3004;
server 127.0.0.1:3005;
}
into something like this?:
upstream backend {
least_conn;
server 127.0.0.1:[3000:3005];
}

There is no port-range syntax in stock nginx upstream blocks. One way to approach this is to run the host with OpenResty, which is based on nginx and can run Lua plugins. Using the ngx.balancer API, a snippet that picks a peer from the port range would look like this:
upstream backend {
    server 0.0.0.1;   # placeholder; nginx requires at least one server entry
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        local start_port = 3000
        local max_port = start_port + 5
        -- balancer_by_lua_block runs once per balancing attempt,
        -- so pick a single port from the range each time
        local port = math.random(start_port, max_port)
        local ok, err = balancer.set_current_peer("127.0.0.1", port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
}
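For completeness, a minimal server block that sends traffic to this upstream could look like the following (the listen port is just an example):
server {
    listen 8080;   # hypothetical front-end port
    location / {
        proxy_pass http://backend;
    }
}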

Related

How to force NGINX to use backup upstream and vice versa?

Maybe it's uncommon, but I'd love to use an upstream definition in my nginx load balancer which looks like this:
upstream backend {
server primary.local.net:80;
server backup.local.net:80 backup;
}
to aid maintenance on those hosts. First I prepare backup.local.net with the newest software, then switch the service over to backup and do the same with primary.local.net. In the end, I switch back to primary.
Right now I'm doing this by loading a second configuration file:
upstream backend {
server primary.local.net:80 backup;
server backup.local.net:80;
}
using the command:
nginx -s reload
But this is laborious, and I hope there is a smarter way to do it?
First of all, using upstream definitions in NGINX should NOT be uncommon! It's the preferred way of doing it.
Unfortunately, there is no really easy solution for NGINX Open Source. But why not try to build something that does not require any config reload?
So given we have two upstream definitions like the ones mentioned above:
upstream blue {
    server primary.local.net:80;
    server backup.local.net:80 backup;
}
upstream green {
    server primary.local.net:80 backup;
    server backup.local.net:80;
}
Blue is primary and green is secondary. Since you say you prepare the hosts beforehand: would it be possible to have something on your backend telling NGINX which deployment is currently active, blue or green?
Another option could be a file on your NGINX instance keeping that information. njs will be able to read from that file and define the upstream to be used based on the information provided.
https://nginx.org/en/docs/njs/reference.html#njs_api_fs
Quick POC:
upstream.conf
upstream blue {
server 127.1:9000;
server 127.1:9100 backup;
}
upstream green {
server 127.1:9000 backup;
server 127.1:9100;
}
# requires the njs module: load_module modules/ngx_http_js_module.so;
js_import upstream from conf.d/upstream.js;
js_set $upstream upstream.set;
server {
listen 80;
location / {
proxy_pass http://$upstream/;
}
}
upstream.js
export default { set }
function set(r) {
    var fs = require('fs');
    var c;
    try {
        c = fs.readFileSync("/etc/nginx/conf.d/active.txt", 'utf8');
    } catch (e) {
        r.error("Error while reading upstream file.");
        c = "blue"; // fall back to a sensible default
    }
    return c;
}
active.txt
blue
Note: Make sure to create the file without a trailing newline, e.g. echo -n "blue" > active.txt.
You can now change the content of active.txt at runtime and the upstream will be chosen dynamically. With this solution you can even check request headers, and if you want to test an inactive upstream, that works as well. Pretty flexible.
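Switching the active deployment at runtime is then a one-liner, with no reload needed:
echo -n "green" > active.txt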
There's a pattern for /etc/nginx where you have a master nginx.conf file that loads all of the config files in another directory, like "active_services".
Your actual config files are stored in "available_services", and symlinked into the active_services directory.
Either flip the link, or delete one and create the other.
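A rough sketch of that layout, with hypothetical file names:
# /etc/nginx/nginx.conf
http {
    include /etc/nginx/active_services/*.conf;
}
# flip the symlink to the prepared variant, then reload
ln -sfn /etc/nginx/available_services/backend-backup-primary.conf \
    /etc/nginx/active_services/backend.conf
nginx -s reload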

websockets in openresty proxy

I created a proxy with MFA using OpenResty, and it mostly works OK.
But I have a problem with websockets: Firefox says that it "cannot connect with server wss://...". Looking in the browser's network panel I can see the switching-protocols request, which seems to be OK. My nginx.conf looks as below:
worker_processes auto;
env TARGET_APPLICATION_HOST;
env TARGET_APPLICATION_PORT;
env TARGET_USE_SSL;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name localhost;
location / {
resolver local=on ipv6=off valid=100s;
content_by_lua_block {
local http = require "resty.http"
local httpc = http.new()
httpc:set_timeout(500)
local ok, err = httpc:connect(
    os.getenv("TARGET_APPLICATION_HOST"),
    tonumber(os.getenv("TARGET_APPLICATION_PORT")))
if not ok then
ngx.log(ngx.ERR, err)
return
end
if os.getenv("TARGET_USE_SSL") == "TRUE" then
-- Trigger the SSL handshake
local session, err = httpc:ssl_handshake(false, os.getenv("TARGET_APPLICATION_HOST"), false)
end
httpc:set_timeout(2000)
httpc:proxy_response(httpc:proxy_request())
httpc:set_keepalive()
}
}
}
}
It is a simpler version of the production proxy, but it returns the same error with websockets. I tried the proxy with pure nginx and it works fine with websockets, but I need the capabilities of OpenResty (proxying to different hosts based on a cookie value).
Is there any simple mistake in the above file, or does OpenResty lack websocket abilities?
lua-resty-http is an HTTP(S) client library; it does not (and probably will not) support the WebSocket protocol.
There is another library for the WebSocket protocol: lua-resty-websocket. It implements both client and server, so it should be possible to write the proxy using this library.
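For illustration only, a rough relay built on lua-resty-websocket (untested; the location and backend URL are assumptions) could pump frames in both directions with two light threads:
location /ws {
    content_by_lua_block {
        local server = require "resty.websocket.server"
        local client = require "resty.websocket.client"

        -- handshake with the browser
        local wb, err = server:new{ timeout = 10000 }
        if not wb then
            ngx.log(ngx.ERR, "server handshake failed: ", err)
            return ngx.exit(444)
        end

        -- connect to the (hypothetical) upstream websocket server
        local wc = client:new{ timeout = 10000 }
        local ok, err = wc:connect("ws://127.0.0.1:9000/ws")
        if not ok then
            ngx.log(ngx.ERR, "backend connect failed: ", err)
            return ngx.exit(502)
        end

        -- copy frames from one side to the other until close/error
        local function pump(from, to)
            while true do
                local data, typ, err = from:recv_frame()
                if not data then
                    if not (err and err:find("timeout", 1, true)) then
                        return   -- hard error or connection gone
                    end
                elseif typ == "close" then
                    to:send_close()
                    return
                elseif typ == "text" then
                    to:send_text(data)
                elseif typ == "binary" then
                    to:send_binary(data)
                end
            end
        end

        -- relay browser -> backend and backend -> browser concurrently
        ngx.thread.wait(
            ngx.thread.spawn(pump, wb, wc),
            ngx.thread.spawn(pump, wc, wb))
    }
}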
I need the capabilities of OpenResty (proxying to different hosts based on a cookie value)
ngx.balancer does exactly what you need, check the example and this answer.
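A minimal sketch of that approach (the cookie name, hosts, and ports are assumptions): stash the cookie in the access phase, then pick the peer in the balancer phase.
upstream backend {
    server 0.0.0.1;   # placeholder; required, never used
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- route to the beta host when the "backend" cookie says so
        local host = ngx.ctx.target == "beta" and "10.0.0.2" or "10.0.0.1"
        local ok, err = balancer.set_current_peer(host, 8080)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
}
server {
    listen 80;
    location / {
        access_by_lua_block {
            -- stash the cookie value for the balancer phase
            ngx.ctx.target = ngx.var.cookie_backend
        }
        proxy_pass http://backend;
    }
}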

nginx (openresty) get current peer

There is an OpenResty load balancer in front of several instances of a container app; the load balancer uses round robin to route traffic to each app instance.
Is there a way I can record the paired backend server's IP address into Redis? The upstream is not fixed; it is dynamic.
I tried to use upstream, but it seems to only work with a fixed upstream {}, not a dynamic one.
docker-compose up --scale nginx_html_app=2
-- this is docker-compose.yml
nginx_html_app:
build: nginx_html_app
proxy:
build: proxy
ports:
- "9000:80"
-- this is proxy.conf
server {
    listen 80;
    set $upstream http://nginx_html_app;
    location / {
        some_lua_block {
            -- get paired backend IP, e.g. 172.18.0.3 (nginx_html_app 1)
            -- save it to redis (I know how to do this)
        }
        proxy_pass $upstream;
    }
}
The upstream IP and port are available in the ngx.var.upstream_addr variable, in header_filter_by_lua and log_by_lua. But logging in the former will make the request wait for your writes to complete, and in the latter network sockets aren't available, so you need to queue your log entries and flush them from a timer.
Something like this (untested, but it should give you the idea):
app.lua - a separate file; we need it so that the per-worker state is cached:
local M = {}
local queue = {}

function M.init()
    -- flush the queue once a second; cosockets are available in timers
    ngx.timer.every(1.0, function(premature)
        if premature then return end
        -- push queue to redis and clear it
    end)
end

function M.log()
    queue[#queue + 1] = ngx.var.upstream_addr
end

return M
nginx (this assumes lua_package_path covers the directory containing app.lua):
init_worker_by_lua_block { require('app').init() }
log_by_lua_block { -- or header_filter_by_lua_block
    require('app').log()
}
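The elided "push to redis" step in the timer callback could look roughly like this, assuming lua-resty-redis, a Redis instance at 127.0.0.1:6379, and a hypothetical list key upstream_addrs (all assumptions):
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000)
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return
end
-- drain the queue into a redis list, then pool the connection
for _, addr in ipairs(queue) do
    red:rpush("upstream_addrs", addr)
end
queue = {}
red:set_keepalive(10000, 100)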

Nginx: Setting up SSL passthrough

I'm trying to configure SSL-passthrough for multiple webapps using the same nginx server (nginx version: nginx/1.13.6), but when restarting the nginx server, I get an error complaining that
nginx: [emerg] "stream" directive is duplicate
The configuration I have is the following:
2 files for the ssl passthrough that look like this:
server1.conf:
stream {
upstream workers {
server 192.168.1.10:443;
server 192.168.1.11:443;
server 192.168.1.12:443;
}
server {
listen server1.com:8443;
proxy_pass workers;
}
}
and server2.conf:
stream {
upstream workers {
server 192.168.1.20:443;
server 192.168.1.21:443;
server 192.168.1.22:443;
}
server {
listen server2.com:8443;
proxy_pass workers;
}
}
If I remove one of the two files, then nginx starts correctly.
How can this be achieved?
Thanks,
Cristi
The immediate cause of the error is that nginx allows only one top-level stream {} block, so the two files cannot each define their own; they have to be merged into a single stream {} block, with distinct upstream names (upstream workers is duplicated too).
Even then, the stream module works at Layer 4 (TCP) and cannot read the encrypted traffic, so it cannot tell apart requests for server1.com and server2.com unless they arrive on different IPs or ports.
This can be solved by one of the following solutions:
Decrypt the traffic on nginx, then proxy_pass it to the backend workers over plain HTTP.
Bind server1.com to a different port than server2.com (see the sketch below).
Get an additional IP address and bind server2.com to that.
Get an additional load balancer and move server2.com there.
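For example, the different-port option with a single merged stream {} block and distinct upstream names might look like this (the second port is an assumption):
stream {
    upstream workers1 {
        server 192.168.1.10:443;
        server 192.168.1.11:443;
        server 192.168.1.12:443;
    }
    upstream workers2 {
        server 192.168.1.20:443;
        server 192.168.1.21:443;
        server 192.168.1.22:443;
    }
    server {
        listen 8443;   # server1.com
        proxy_pass workers1;
    }
    server {
        listen 9443;   # server2.com, moved to its own port
        proxy_pass workers2;
    }
}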

Tell Nginx to try another upstream on error

I'm trying to determine if it is possible to tell Nginx to choose another server for a specified upstream when the first server selected returns a specific error.
Eg:
try upstream server 0
if it returns a certain error code (eg: 503)
try upstream 1
else
return response to client
Here's a sample of what you need to do; you can read more details in this answer:
upstream myservers {
    # the first server is the main server
    server xxx.xxx.xxx.xxx weight=999 fail_timeout=5s max_fails=1;
    server xxx.xxx.xxx.xxx;
}
server {
    # ...the rest of your config
    location / {
        # by default nginx only retries on connection errors and timeouts;
        # add http_503 so a 503 response also moves on to the next server
        proxy_next_upstream error timeout http_503;
        proxy_pass http://myservers;
    }
}
