I tested an NGINX config with API access (with limit_except method restrictions)
# NGINX IP Is 10.250.11.16
log_format 20004 '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'"$bytes_sent" "$request_length" "$request_time" '
'"$gzip_ratio" $server_protocol ';
server {
satisfy all;
listen 20004;
status_zone server_APIAccess_MethodDeny_NGINXapi_20004;
include /etc/nginx/api_conf.d/api_error.conf;
error_page 404 = @400; # Invalid paths are treated as bad requests
proxy_intercept_errors on; # Do not send backend errors to the client
default_type application/json; # If no content-type then assume JSON
location / {
limit_except GET {
allow 10.250.11.16/32;
allow 10.250.20.137/32;
deny all;
}
add_header X-IP "$remote_addr" always; # Tested Header
add_header X-Method "$request" always; # Tested Header
access_log /var/log/nginx/access.log 20004;
error_log /var/log/nginx/debug.log debug;
proxy_pass http://10.250.11.11/api/7/nginx;
}
}
All IPs are allowed to use the GET method
( ex : Client IP - 10.250.11.12 / Request URL : curl -X GET 10.250.11.16:20004 )
The access log shows the client IP 10.250.11.12 and a 200 return code
It's working as expected. According to the nginx docs, limit_except GET limits all methods except GET and HEAD, so it's expected that GET requests from all IPs would work.
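Note that limit_except cannot restrict GET itself (and allowing GET always allows HEAD too). If the goal were to apply the allowlist to every method including GET, a sketch would be to drop limit_except and put the allow/deny rules directly in the location (same IPs as the original config):

```nginx
# Sketch: apply the allowlist to ALL methods, GET/HEAD included
location / {
    allow 10.250.11.16/32;
    allow 10.250.20.137/32;
    deny  all;

    proxy_pass http://10.250.11.11/api/7/nginx;
}
```

With the original limit_except config, a non-allowlisted client should still get 403 for e.g. curl -X POST 10.250.11.16:20004, while GET keeps returning 200.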
I'm trying to reverse proxy to another endpoint on /alertmanager, but it fails to connect. Weirdly enough, I'm able to connect to the endpoint directly from inside the pod running nginx.
A quick overview of my application architecture is this:
nginx ingress on cluster -> nginx load balancer -> <many services on different endpoints>
This is a minimized nginx configuration that replicates the issue:
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {}
http {
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] $status '
'"$request" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /dev/stderr main;
sendfile on;
tcp_nopush on;
resolver kube-dns.kube-system.svc.cluster.local;
server {
listen 8080;
proxy_set_header X-Scope-OrgID 0;
location = / {
return 200 'OK';
auth_basic off;
}
location /alertmanager {
proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080$request_uri;
}
}
}
I'm able to curl the mimir endpoint behind /alertmanager directly, but through nginx I can't reach /alertmanager without getting a 404 error. I can reach /, and if I put the proxy_pass inside the / location, it does work.
Example of what I'm seeing:
/ $ curl localhost:8080/
OK
/ $ curl localhost:8080/alertmanager
the Alertmanager is not configured
Curling http://mimir-distributed-alertmanager.mimir.svc.cluster.local does in fact return the HTML of the page I'm expecting.
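Two observations worth checking, offered as a sketch rather than a confirmed diagnosis. First, "the Alertmanager is not configured" is generated by the backend, which suggests the request is reaching Mimir and the remaining issue may be on the Mimir side (e.g. per-tenant Alertmanager configuration). Second, with proxy_pass ...$request_uri the /alertmanager prefix is forwarded to the backend verbatim; if the backend actually serves the relevant paths at /, the prefix can be stripped by giving proxy_pass a URI part and dropping $request_uri:

```nginx
# Sketch: strip the /alertmanager prefix before proxying
# (assumes the backend serves the relevant paths at /)
location /alertmanager/ {
    proxy_set_header X-Scope-OrgID 0;
    proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080/;
}
```

Note that without variables in proxy_pass, nginx resolves the hostname once at startup instead of through the resolver directive, which changes behavior if the service IP moves.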
I want to copy requests to another backend with ngx_http_mirror_module.
This is my nginx.conf.
Nginx version is 1.19.10
worker_processes 1;
error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 8888;
server_name localhost;
location / {
proxy_pass http://127.0.0.1:8080;
mirror /mirror;
}
location /mirror {
internal;
proxy_pass http://127.0.0.1:18080$request_uri;
}
}
}
My Spring applications listen on 8080 and 18080.
The problem is that when the backend server handling the mirrored request returns a large response body, it throws a ClientAbortException caused by a connection reset by peer.
Nothing is recorded in the nginx error log.
The nginx access log records status 200 for the mirrored request.
Problems tend to occur when the response size is 4 KB or larger.
Increasing proxy_buffer_size may help, but for large responses (8 KB or more?) the problem occurs even when the response is smaller than proxy_buffer_size.
I tried changing subrequest_output_buffer_size, but nothing changed.
How can I stop the error?
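Not a confirmed fix, but two experiments that follow from how mirroring works: the mirrored response is read and then discarded by nginx, so (a) making subrequest_output_buffer_size comfortably larger than the biggest mirrored response, and (b) having the mirror backend reply with a small body (e.g. 204 No Content) for mirrored requests both reduce the chance of the connection being torn down mid-response. A sketch (the buffer value is an assumption to tune, and X-Mirror is a made-up header name for the backend to key on):

```nginx
http {
    # experiment: large enough to hold an entire mirrored response body
    subrequest_output_buffer_size 64k;

    server {
        listen 8888;

        location / {
            proxy_pass http://127.0.0.1:8080;
            mirror /mirror;
            mirror_request_body on;  # default; forwards the original request body
        }

        location = /mirror {
            internal;
            proxy_pass http://127.0.0.1:18080$request_uri;
            # experiment: mark mirrored traffic so the backend can answer 204
            proxy_set_header X-Mirror 1;
        }
    }
}
```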
I'm new to nginx and trying to achieve the following behaviour: I want to pass a header to nginx and route the request to another server based on that header. I want to load my map from a file, so I'm using the following configuration:
worker_processes 1;
error_log logs/error.log;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
map $http_dest $new_server{
include port_to_address.map;
default google.com;
}
server {
listen 200;
server_name localhost;
access_log logs/host.access.log main;
location / {
proxy_pass http://$new_server;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
with the port_to_address.map in the following format
value_1 yahoo.com;
value_2 netflix.com;
With this configuration nginx starts normally, but when I pass it a header value that exists in the file, it returns the following error:
2021/04/18 21:52:49 [error] 17352#20504: *3 no resolver defined to resolve yahoo.com, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:200"
When I use the IP address instead of the server name, it works perfectly. I also have another nginx that round-robins requests between nodes it reads from a file (with upstream), and there I don't get any error even though I'm using the server names and not their IPs.
I read about using resolver, but it didn't work for me, and I'd prefer to avoid it. Is there any other way to make this work without using a resolver or the servers' IPs? (I don't mind changing the map file structure if needed.)
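One resolver-free approach (a sketch; the upstream names below are made up) is to have the map emit the name of an upstream block instead of a raw hostname. Hostnames inside upstream blocks are resolved once at startup, and when proxy_pass contains a variable, nginx first looks the value up among the defined upstream groups:

```nginx
# Hostnames here are resolved when nginx starts, so no resolver is needed.
upstream dest_value_1 { server yahoo.com; }
upstream dest_value_2 { server netflix.com; }
upstream dest_default { server google.com; }

map $http_dest $new_server {
    default dest_default;
    value_1 dest_value_1;
    value_2 dest_value_2;
}

server {
    listen 200;
    location / {
        proxy_pass http://$new_server;  # matched against the upstream groups by name
    }
}
```

The map file would then pair header values with upstream names (value_1 dest_value_1; etc.), which keeps the include-from-file workflow.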
I mean adding an upstream, not adding a server to an existing upstream.
That is, I don't have an upstream block like:
upstream backend {
# ...
}
I want to create an upstream block dynamically, something like:
content_by_lua_block {
upstream_block.add('backend');
upstream_block.add_server('backend', '127.0.0.1', 8080);
upstream_block.add_server('backend', '127.0.0.1', 8081);
upstream_block.add_server('backend', '127.0.0.1', 8082);
upstream_block.del_server('backend', '127.0.0.1', 8080);
}
proxy_pass http://backend;
You may use balancer_by_lua* and https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md
You will have full control over which upstream peer is selected for a given request.
You may provision the peer list in your own code, or use an existing upstream config as the source via https://github.com/openresty/lua-upstream-nginx-module
I found an nginx module called ngx_http_dyups_module that matches my question.
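For context, dyups works by exposing an HTTP management interface from which upstream groups can be created and edited at runtime; the snippet below is a sketch based on the module's README (the 8081 port and the upstream name "backend" are assumptions):

```nginx
# Management server exposing the dyups API (sketch)
server {
    listen 127.0.0.1:8081;
    location / {
        dyups_interface;
    }
}
```

Then something like curl -d "server 127.0.0.1:8080;" 127.0.0.1:8081/upstream/backend adds a server to the "backend" group at runtime, and curl 127.0.0.1:8081/detail lists the current groups.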
Here is my example of how to dynamically add upstream servers based on CPU count.
Servers: I used OpenResty and configured it to listen on multiple ports.
worker_processes auto;
error_log logs/openresty.err ;
events {
worker_connections 1000;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/openresty.log main;
server {
listen 127.0.0.1:8080;
listen 127.0.0.1:8081;
listen 127.0.0.1:8082;
listen 127.0.0.1:8083;
listen 127.0.0.1:8084;
listen 127.0.0.1:8085;
listen 127.0.0.1:8086;
listen 127.0.0.1:8087;
listen 127.0.0.1:8088;
listen 127.0.0.1:8089;
listen 127.0.0.1:8090;
server_name *.*;
location / {
content_by_lua_block {
--[[ Alternatives for getting the CPU count:
local NumCores = tonumber(os.getenv("NUMBER_OF_PROCESSORS")) -- Windows
local NumCores = 10                                          -- hard-coded
local f = io.popen("ps -ef | grep nginx | wc -l ")
]]
local f = io.popen("/usr/sbin/sysctl -n hw.ncpu ")
ngx.print('CPU count: '..f:read())
f:close()
}
}
}
}
And the reverse proxy, which dynamically adds upstream servers based on CPU count:
error_log logs/reverse_openresty.err ;
events {
worker_connections 1000;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/reverse_openresty.log main;
upstream backend {
server 0.0.0.1; # just an invalid address as a placeholder
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local start_port=8080
local f = io.popen("/usr/sbin/sysctl -n hw.ncpu ") -- get cpu count
local cpu_count=tonumber(f:read())
f:close()
local max_port=start_port+cpu_count-2
repeat
local ok, err = balancer.set_current_peer('127.0.0.1', start_port)
if not ok then
ngx.log(ngx.ERR, "failed to set the current peer: ", err)
return ngx.exit(500)
end
start_port=start_port+1
until start_port>max_port
}
keepalive 10; # connection pool
}
server {
listen 80;
location / {
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass http://backend; # force http, as the node server.js backend only has http
}
}
}
The configuration is tested on MacOs.
I'm using ngx_dynamic_upstream. It's really good in production. I forked the original from the owner and checked the source code, just in case.