I'm trying to determine if it is possible to tell Nginx to choose another server for a specified upstream when the first server selected returns a specific error.
E.g.:
try upstream server 0
if it returns a certain error code (e.g. 503)
    try upstream server 1
else
    return the response to the client
Here's a sample of what you need to do:
upstream myservers {
    # the first server is the main server
    server xxx.xxx.xxx.xxx weight=999 fail_timeout=5s max_fails=1;
    server xxx.xxx.xxx.xxx;
}
server {
    # ... the rest of your config
    location / {
        proxy_pass http://myservers;
    }
}
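The part that makes the failover react to a specific status code is the proxy_next_upstream directive; http_503 is one of its supported values. A minimal sketch of the location above with it added:

location / {
    # a 503 from the first server counts as a failed attempt,
    # so the request is retried on the next server in the group
    proxy_next_upstream error timeout http_503;
    proxy_pass http://myservers;
}

With max_fails=1 and fail_timeout=5s, a single failed attempt also takes the main server out of rotation for 5 seconds.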
I have a group of upstream servers, and I have been trying to print in the access logs the upstream URL that the request went to. I tried $proxy_host and $upstream_addr, but they didn't solve my issue: $proxy_host prints the upstream name, not the URL, and $upstream_addr prints the IP address. Is there a way to print the URL in the access logs?
upstream backend {
    server backend1.example.com:8080 weight=90;
    server backend2.example.com:8080 weight=10;
}
location /test {
    set $foo backend;
    proxy_pass https://$foo;
}
With $proxy_host, the access log prints "proxy_host=backend". If I use $upstream_addr, it prints the IP: "upstream_addr=10.1.0.0".
Is there a way to print whether the request was routed to backend1.example.com or backend2.example.com?
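One workaround, since nginx only exposes the resolved peer through $upstream_addr: map that address back to a hostname and log the mapped value. A sketch, assuming the backends resolve to known, stable addresses (the IPs in the map are hypothetical):

map $upstream_addr $upstream_name {
    # replace with the real resolved IP:port of each backend;
    # anchored regexes, because $upstream_addr can list several
    # comma-separated peers when a request was retried
    ~^10\.1\.0\.1:8080  backend1.example.com;
    ~^10\.1\.0\.2:8080  backend2.example.com;
    default             $upstream_addr;
}
log_format upstreamlog '$remote_addr [$time_local] "$request" -> $upstream_name';
access_log /var/log/nginx/access.log upstreamlog;

Both map and log_format go in the http context.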
Maybe it's uncommon, but I'd love to use an upstream definition in my nginx load balancer which looks like this:
upstream backend {
server primary.local.net:80;
server backup.local.net:80 backup;
}
to aid maintenance of those hosts. First I prepare backup.local.net with the newest software, then switch the service over to backup and do the same with primary.local.net. In the end, I switch back to primary.
Right now I'm doing this by loading a second configuration file:
upstream backend {
server primary.local.net:80 backup;
server backup.local.net:80;
}
using the command:
nginx -s reload
But this is laborious, and I hope there is a much smarter way to do it.
First of all, using upstream definitions in NGINX should NOT be uncommon! It's the preferred way of doing it.
Unfortunately, there is no really easy solution for this in NGINX Open Source. But why not build something that does not require any config reload?
So, given we have two upstream definitions like the ones mentioned above:
upstream blue {
    server primary.local.net:80;
    server backup.local.net:80 backup;
}
upstream green {
    server primary.local.net:80 backup;
    server backup.local.net:80;
}
Blue is primary and green is secondary. Since you say you prepare the hosts beforehand, would it be possible for something on your backend to tell NGINX which deployment is currently active, blue or green?
Another option could be a file on your NGINX instance keeping that information; njs can read from that file and pick the upstream based on its contents.
https://nginx.org/en/docs/njs/reference.html#njs_api_fs
Quick POC:
upstream.conf
upstream blue {
server 127.1:9000;
server 127.1:9100 backup;
}
upstream green {
server 127.1:9000 backup;
server 127.1:9100;
}
js_import upstream from conf.d/upstream.js;
js_set $upstream upstream.set;
server {
listen 80;
location / {
proxy_pass http://$upstream/;
}
}
upstream.js
export default { set }
function set(r) {
    var fs = require('fs');
    var c;
    try {
        // read the name of the active deployment ("blue" or "green")
        c = fs.readFileSync("/etc/nginx/conf.d/active.txt", "utf8");
    } catch (e) {
        r.error("Error while reading upstream file.");
        c = "blue"; // fall back to a default deployment
    }
    return c;
}
active.txt
blue
Note: Make sure to create the file without a newline at the end, e.g. echo -n "blue" > active.txt.
You can now change the content of active.txt at runtime (e.g. echo -n "green" > active.txt) and the upstream will be selected dynamically. With this solution you can even check request headers, and testing an inactive upstream works as well. Pretty flexible.
There's a pattern for /etc/nginx where you have a master nginx.conf file that loads all of the config files in another directory, like "active_services".
Your actual config files are stored in "available_services", and symlinked into the active_services directory.
Either flip the link, or delete one and create the other.
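A minimal sketch of that layout, using the directory names above (note that nginx still needs a reload to pick up the flipped symlink):

# /etc/nginx/nginx.conf
http {
    # loads whatever is currently symlinked into active_services/
    include /etc/nginx/active_services/*.conf;
}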
I'm trying to configure SSL-passthrough for multiple webapps using the same nginx server (nginx version: nginx/1.13.6), but when restarting the nginx server, I get an error complaining that
nginx: [emerg] "stream" directive is duplicate
The configuration I have is the following:
2 files for the ssl passthrough that look like this:
server1.conf:
stream {
upstream workers {
server 192.168.1.10:443;
server 192.168.1.11:443;
server 192.168.1.12:443;
}
server {
listen server1.com:8443;
proxy_pass workers;
}
}
and server2.conf:
stream {
upstream workers {
server 192.168.1.20:443;
server 192.168.1.21:443;
server 192.168.1.22:443;
}
server {
listen server2.com:8443;
proxy_pass workers;
}
}
If I remove one of the two files, then nginx starts correctly.
How can this be achieved?
Thanks,
Cristi
The stream module works at the TCP connection level, below TLS, and cannot read the encrypted traffic; it thus cannot tell apart requests hitting server1.com and server2.com unless they point to different IPs.
This can be solved by one of the following:
Decrypt the traffic on nginx, then proxy_pass it to the backend processes/workers over HTTP.
Bind server1.com to a different port than server2.com (see the sketch after this list).
Get an additional IP address and bind server2.com to that.
Get an additional load balancer and move server2.com there.
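As for the nginx: [emerg] error itself: only one top-level stream block is allowed, which is why the two files conflict; they have to be merged into a single block, and the two upstreams then need distinct names. A minimal sketch of option 2 combined with that merge:

stream {
    upstream workers1 {
        server 192.168.1.10:443;
        server 192.168.1.11:443;
        server 192.168.1.12:443;
    }
    upstream workers2 {
        server 192.168.1.20:443;
        server 192.168.1.21:443;
        server 192.168.1.22:443;
    }
    server {
        listen 8443;   # server1.com
        proxy_pass workers1;
    }
    server {
        listen 9443;   # server2.com, moved to its own port
        proxy_pass workers2;
    }
}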
I'm looking for a way to specify a port range in the nginx upstream block.
Is there a way to turn this:
upstream backend {
least_conn;
server 127.0.0.1:3000;
server 127.0.0.1:3001;
server 127.0.0.1:3002;
server 127.0.0.1:3003;
server 127.0.0.1:3004;
server 127.0.0.1:3005;
}
into something like this?
upstream backend {
least_conn;
server 127.0.0.1:[3000:3005]
}
One way to approach this is to run the host on OpenResty, which is based on Nginx and can load Lua plugins. Note that balancer_by_lua_block runs once per request and must select exactly one peer, so the snippet below picks one port from the range per request (at random) instead of looping over all of them:
upstream backend {
    # placeholder address required by nginx; the Lua balancer
    # below picks the real peer on every request
    server 0.0.0.1;
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        local start_port = 3000
        local port_count = 6  -- covers ports 3000..3005
        local port = start_port + math.random(0, port_count - 1)
        local ok, err = balancer.set_current_peer("127.0.0.1", port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
}
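The upstream is then used like any other (a sketch):

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}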
I'm asking myself if it is possible to reproduce the NGINX proxy_next_upstream mechanism on an F5 BIG-IP.
As a reminder, here is how it works on NGINX. Given a pool of upstream servers, let's call it webservers, composed of 2 instances:
upstream webservers {
server 192.168.1.10:8080 max_fails=1 fail_timeout=10s;
server 192.168.1.20:8080 max_fails=1 fail_timeout=10s;
}
With the following instruction (proxy_next_upstream error), if a TCP connection fails on the first instance while routing a request (because the instance is down, for example), NGINX automatically forwards the request to the second instance, and the user doesn't see any error.
Furthermore, instance 1 is blacklisted for 10 seconds (fail_timeout=10s).
Every 10 seconds, NGINX will try to route one request to instance 1 (to see whether the instance is back) and make it available again if that succeeds; otherwise it waits another 10 seconds before trying again.
location / {
proxy_next_upstream error;
proxy_pass http://webservers/$1;
}
I hope I'm clear enough...
Thanks for your help.
Here is something interesting: https://support.f5.com/kb/en-us/solutions/public/10000/600/sol10640.html