How to dynamically add an upstream in Nginx?

I mean adding an upstream block, not adding a server to an existing upstream.
That is, I don't have an upstream block like:
upstream backend {
    # ...
}
I want to create an upstream block dynamically, something like:
content_by_lua_block {
    upstream_block.add('backend');
    upstream_block.add_server('backend', '127.0.0.1', 8080);
    upstream_block.add_server('backend', '127.0.0.1', 8081);
    upstream_block.add_server('backend', '127.0.0.1', 8082);
    upstream_block.del_server('backend', '127.0.0.1', 8080);
}
proxy_pass http://backend;

You may use balancer_by_lua* together with https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md
This gives you full control over which upstream peer is selected for a given request.
You can provision the peer data yourself, or use your existing upstream configuration as the source via https://github.com/openresty/lua-upstream-nginx-module
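A minimal sketch of that approach (the peer values here are hypothetical; in practice you would look them up in a shared dict, a config service, etc.):

upstream backend {
    server 0.0.0.1;   # placeholder; the Lua balancer picks the real peer
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- hypothetical peer; normally read from ngx.shared.DICT or similar
        local host, port = "127.0.0.1", 8080
        local ok, err = balancer.set_current_peer(host, port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
}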

I found an nginx module called ngx_http_dyups_module that matches my question.
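Roughly how it is used, going by the module's README (treat the exact endpoints and the 8081 admin port as assumptions and check the README for your version): you expose a dyups_interface location, then create, inspect, and delete upstreams over HTTP at runtime.

server {
    listen 127.0.0.1:8081;
    location / {
        dyups_interface;
    }
}

# add (or update) an upstream named "backend" with one server
curl -d "server 127.0.0.1:8080;" http://127.0.0.1:8081/upstream/backend
# list all dynamic upstreams and their servers
curl http://127.0.0.1:8081/detail
# remove the upstream again
curl -i -X DELETE http://127.0.0.1:8081/upstream/backend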

My example of how to dynamically add upstream servers based on CPU count.
The servers: I used OpenResty and configured it to listen on multiple ports.
worker_processes auto;
error_log logs/openresty.err;
events {
    worker_connections 1000;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/openresty.log main;
    server {
        listen 127.0.0.1:8080;
        listen 127.0.0.1:8081;
        listen 127.0.0.1:8082;
        listen 127.0.0.1:8083;
        listen 127.0.0.1:8084;
        listen 127.0.0.1:8085;
        listen 127.0.0.1:8086;
        listen 127.0.0.1:8087;
        listen 127.0.0.1:8088;
        listen 127.0.0.1:8089;
        listen 127.0.0.1:8090;
        server_name *.*;
        location / {
            content_by_lua_block {
                --[[
                local NumCores = tonumber(os.getenv("NUMBER_OF_PROCESSORS"))
                local NumCores = 10
                ]]--
                -- local f = io.popen("ps -ef | grep nginx | wc -l ")
                local f = io.popen("/usr/sbin/sysctl -n hw.ncpu ")
                ngx.print('CPU count: '..f:read())
                f:close()
            }
        }
    }
}
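With the backend running, each of the ports above should report the detected CPU count, for example (the number depends on the machine):

curl http://127.0.0.1:8080/
CPU count: 8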
And the reverse proxy, which dynamically adds upstream servers based on the CPU count.
error_log logs/reverse_openresty.err;
events {
    worker_connections 1000;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/reverse_openresty.log main;
    upstream backend {
        server 0.0.0.1; # just an invalid address as a place holder
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local start_port = 8080
            local f = io.popen("/usr/sbin/sysctl -n hw.ncpu ") -- get cpu count
            local cpu_count = tonumber(f:read())
            f:close()
            local max_port = start_port + cpu_count - 2
            repeat
                local ok, err = balancer.set_current_peer('127.0.0.1', start_port)
                if not ok then
                    ngx.log(ngx.ERR, "failed to set the current peer: ", err)
                    return ngx.exit(500)
                end
                start_port = start_port + 1
            until start_port > max_port
        }
        keepalive 10; # connection pool
    }
    server {
        listen 80;
        location / {
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_pass http://backend; # force using http. as node server.js only have http
        }
    }
}
The configuration was tested on macOS.

I'm using ngx_dynamic_upstream. It has been really good in production. I forked the original from the owner and reviewed the source code, just in case.

Related

NGINX Reverse Proxy Fails with 404 despite being able to curl endpoint

I'm trying to reverse proxy to another endpoint on /alertmanager but it fails to connect. Weirdly enough, I'm able to connect to the endpoint directly from inside the pod running nginx.
A quick overview of my application architecture is this:
nginx ingress on cluster -> nginx load balancer -> <many services on different endpoints>
This is a minimized nginx configuration that replicates the issue:
worker_processes 5;  ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {}
http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path /tmp/proxy_temp_path;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stderr main;
    sendfile on;
    tcp_nopush on;
    resolver kube-dns.kube-system.svc.cluster.local;
    server {
        listen 8080;
        proxy_set_header X-Scope-OrgID 0;
        location = / {
            return 200 'OK';
            auth_basic off;
        }
        location /alertmanager {
            proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080$request_uri;
        }
    }
}
I'm able to curl the Mimir endpoint directly, but going through the proxy I can't reach /alertmanager without getting a 404 error. I can get to / just fine, and if I put the proxy_pass inside the / location it does work.
Example of what I'm seeing:
/ $ curl localhost:8080/
OK
/ $ curl localhost:8080/alertmanager
the Alertmanager is not configured
Curling http://mimir-distributed-alertmanager.mimir.svc.cluster.local does in fact return the HTML of the page I'm expecting.
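One variation that often matters with prefix locations, sketched here without claiming it resolves this particular setup: giving proxy_pass a URI part makes nginx strip the matched /alertmanager/ prefix before forwarding, in case the upstream does not expect to see it (whether Mimir's Alertmanager wants the prefix depends on its own path configuration):

location /alertmanager/ {
    # the trailing slashes make nginx replace /alertmanager/ with / when proxying
    proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080/;
}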

I used nginx rewrite to redirect from old URL to new URL, now curl GET and POST requests to old URL don't work

I have a service running on https://old-server.net:8444/devs/. I set up a new service on a new server, https://new-server.net/. When accessing the new service via the web, things work as expected. But when trying to log in to the old service via curl (a POST request) or download from it (a GET request), I just get the "301 Moved Permanently" message. Here is my nginx.conf:
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    server {
        listen 80;
        server_name new-server.net;
        server_tokens off;
        location / {
            return 301 https://$host$request_uri;
        }
    }
    server {
        listen 443 ssl;
        server_name new-server.net;
        server_tokens off;
        ssl_certificate /etc/nginx/certs/server.cer;
        ssl_certificate_key /etc/nginx/certs/server.key;
        location / {
            proxy_pass http://new-server.net:8081/;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_redirect http:// https://;
        }
    }
    server {
        listen 8444 ssl;
        server_name old-server.net;
        server_tokens off;
        ssl_certificate /etc/nginx/certs/server.cer;
        ssl_certificate_key /etc/nginx/certs/server.key;
        location /devs {
            rewrite ^/devs(.*) https://new-server.net$1 permanent;
        }
    }
}
I'm using rewrite because the new server doesn't have the /devs/ context path of the old server. I wasn't sure how to achieve this with a 'return 301' line. So, is it possible for me to allow devs to continue to GET and POST to the old URL and have those requests sent to the new URL?
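One way to keep POST working is to redirect with 308 instead of 301: a 308 requires the client to replay the same method and body, whereas a 301 lets clients (curl included) downgrade POST to GET, and curl only follows redirects at all when given -L. A sketch, assuming the old /devs paths map 1:1 onto the new server's root:

location ~ ^/devs(?<rest>.*)$ {
    # 308 = permanent redirect that preserves the request method and body
    return 308 https://new-server.net$rest$is_args$args;
}

Alternatively, the old server could proxy_pass the /devs/ location to the new server instead of redirecting, which keeps existing clients working without any change on their side.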

How to configure Nginx traffic to go through another proxy

I have nginx configured to act as a reverse proxy
http {
    log_format combined '$proxy_protocol_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent"';
    #...
    server {
        server_name localhost;
        listen 80 proxy_protocol;
        listen 443 ssl proxy_protocol;
        ssl_certificate /etc/nginx/ssl/public.example.com.pem;
        ssl_certificate_key /etc/nginx/ssl/public.example.com.key;
        location /app/ {
            proxy_pass http://backend1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $proxy_protocol_addr;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        }
    }
Something very similar to the example above.
But I need all the traffic that goes to the backend server to pass through another proxy.
Meaning:
Client request -> Nginx (as reverese proxy) --- all traffic---> Proxy server -> backend server
Is it possible?
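Partially. nginx's proxy_pass cannot issue CONNECT requests, so it cannot tunnel HTTPS upstream traffic through a classic forward proxy. For plain-HTTP backends, one sketch is to point proxy_pass at the intermediate proxy and keep the real backend in the Host header, which works when that intermediate is itself a reverse proxy that routes by Host (proxy.internal:3128 and backend1.example.com below are hypothetical names):

location /app/ {
    proxy_pass http://proxy.internal:3128;          # hypothetical intermediate proxy
    proxy_set_header Host backend1.example.com;     # hypothetical backend it should route to
    proxy_set_header X-Real-IP $proxy_protocol_addr;
    proxy_set_header X-Forwarded-For $proxy_protocol_addr;
}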

Nginx proxy server error: "upstream server temporarily disabled while connecting to upstream"

I'm getting an error when using the docker image for setting up an nginx proxy server: nginx-proxy. If I hit an endpoint on my site, the response is incredibly slow to come back in some instances. This happens pretty much immediately if I hit an endpoint three times, for example, in relatively quick succession. The log for nginx shows the following error:
2017/05/14 09:24:26 [warn] 26#26: *29 upstream server temporarily
disabled while connecting to upstream, client: 10.255.0.2, server: [ip
removed], request: "GET
/documents/5918206a-8da0-4deb-86b2-6b627867e0d5 HTTP/1.1", upstream:
"http://10.255.0.4:8080/documents/5918206a-8da0-4deb-86b2-6b627867e0d5",
host: "[ip removed]"
The log for my back end service doesn't show any errors, so I'm not sure what may be going on. I am guessing it is a configuration issue with nginx, which could be fixed by changing the settings, but I am not sure where to start. Does anyone have any ideas?
My configuration looks like this in the end when the docker instance runs:
nginx.conf:
# cat nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 128;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
conf.d/default.conf:
daemon off;
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
upstream [ip removed] {
    ## Can be connect with "ingress" network
    # datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
    server 10.255.0.6:8080;
    ## Can be connect with "datemo_default" network
    # datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
    server 10.0.0.5:8080;
}
server {
    server_name [ip removed];
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://[ip removed];
    }
}
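For context, the "temporarily disabled" warning comes from nginx's passive health checking: after max_fails failed or timed-out attempts within fail_timeout (defaults 1 and 10s), a peer is taken out of rotation for fail_timeout, and requests pile onto the remaining one. A sketch of the relevant knobs, with illustrative values and a hypothetical upstream name; in this setup the upstream block is generated by nginx-proxy's template, so any change would have to go through its template rather than the generated file:

upstream backend_pool {                            # hypothetical name
    # only mark a peer down after 3 failures within 30s, and only for 30s
    server 10.255.0.6:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.5:8080   max_fails=3 fail_timeout=30s;
}
# which errors count as a "failure" is governed by proxy_next_upstream;
# slow connects are bounded by proxy_connect_timeout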

Logging a variable set by nginx's Lua module

I am trying to use the Lua module in nginx to set a variable ("foo") based on JSON in the body of a request. Then I want to log the value of that variable to the access log.
Like so:
http {
    log_format mylogfmt '$remote_addr - $remote_user [$time_local] \
        "$request" $status $body_bytes_sent "$http_referer" \
        "$http_user_agent" "$foo"'
}
location / {
    proxy_pass http://remote-server.example.com/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_connect_timeout 150;
    proxy_send_timeout 100;
    proxy_read_timeout 100;
    proxy_buffers 4 32k;
    client_max_body_size 8m;
    client_body_buffer_size 128k;
    rewrite_by_lua '
        cjson = require "cjson"
        ngx.req.read_body()
        body_table = cjson.decode(ngx.var.request_body)
        ngx.var.foo = body_table["foo"]
    ';
    access_log /var/log/nginx/access.log mylogfmt;
}
However, nginx won't start with this configuration. It complains thusly:
danslimmon#whatever:~$ sudo /etc/init.d/nginx reload
Reloading nginx configuration: nginx: [emerg] unknown "foo" variable
nginx: configuration file /etc/nginx/nginx.conf test failed
I tried adding a 'set $foo "-"' to the location, but that just seems to override what I'm doing in Lua.
Thoughts?
My nginx -V output
You need to define the variable $foo before the Lua module can use it. Check the doc for an example defining the variable within the location directive before utilizing it.
Since the link above no longer works, do this:
http {
    (...)
    map $host $foo {
        default '';
    }
    (rest of your code)
}
This is because map can only be declared at the http level, and it defines the variable for all the vhosts in one place (with set you would have to repeat it per server or location). geoip could also be a good option.
