NGINX limit_except is not working - nginx

I tested an NGINX config for API access, restricting methods with limit_except:
# NGINX IP is 10.250.11.16
log_format 20004 '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$http_x_forwarded_for" '
                 '"$bytes_sent" "$request_length" "$request_time" '
                 '"$gzip_ratio" $server_protocol ';

server {
    satisfy all;
    listen 20004;
    status_zone server_APIAccess_MethodDeny_NGINXapi_20004;
    include /etc/nginx/api_conf.d/api_error.conf;
    error_page 404 = @400;         # Invalid paths are treated as bad requests
    proxy_intercept_errors on;     # Do not send backend errors to the client
    default_type application/json; # If no content-type then assume JSON

    location / {
        limit_except GET {
            allow 10.250.11.16/32;
            allow 10.250.20.137/32;
            deny all;
        }
        add_header X-IP "$remote_addr" always;  # Test header
        add_header X-Method "$request" always;  # Test header
        access_log /var/log/nginx/access.log 20004;
        error_log /var/log/nginx/debug.log debug;
        proxy_pass http://10.250.11.11/api/7/nginx;
    }
}
Every client IP is allowed to use the GET method
(e.g. client IP 10.250.11.12, request: curl -X GET 10.250.11.16:20004)
The access log shows the client IP 10.250.11.12 and a 200 return code.

It's working as expected. According to the nginx docs, limit_except GET limits all methods except GET and HEAD (HEAD is implied by GET), so it is expected that every IP can still issue GET requests.
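If the intent was to restrict GET requests by IP as well, limit_except cannot express that, since the excepted methods (here GET, and the implied HEAD) are never restricted. A sketch of one alternative: plain allow/deny at the location level, which applies to every method:

```nginx
location / {
    # applies to all methods, including GET and HEAD
    allow 10.250.11.16/32;
    allow 10.250.20.137/32;
    deny  all;

    proxy_pass http://10.250.11.11/api/7/nginx;
}
```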

Related

NGINX Reverse Proxy Fails with 404 despite being able to curl endpoint

I'm trying to reverse proxy to another endpoint on /alertmanager, but it fails to connect. Oddly enough, I'm able to reach the endpoint directly from inside the pod running nginx.
A quick overview of my application architecture is this:
nginx ingress on cluster-> nginx load balancer -> <many services on different endpoints>
This is a minimized nginx configuration that replicates the issue:
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;

events {}

http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path /tmp/proxy_temp_path;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stderr main;

    sendfile on;
    tcp_nopush on;
    resolver kube-dns.kube-system.svc.cluster.local;

    server {
        listen 8080;
        proxy_set_header X-Scope-OrgID 0;

        location = / {
            return 200 'OK';
            auth_basic off;
        }

        location /alertmanager {
            proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080$request_uri;
        }
    }
}
I'm able to curl the mimir endpoint for /alertmanager directly, but through nginx I can't reach /alertmanager without getting a 404 error. I can get to /, and if I move the proxy_pass into the / location it does work.
Example of what I'm seeing:
/ $ curl localhost:8080/
OK
/ $ curl localhost:8080/alertmanager
the Alertmanager is not configured
Curling http://mimir-distributed-alertmanager.mimir.svc.cluster.local does in fact return the HTML of the page I'm expecting.
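No answer is shown for this one, but a common cause of this symptom (an assumption here, not a confirmed diagnosis) is a path-prefix mismatch: with $request_uri appended, the upstream receives the literal /alertmanager/... path. If the upstream actually serves its routes at /, stripping the prefix can be sketched as:

```nginx
location /alertmanager/ {
    # a URI part ("/") on proxy_pass replaces the matched /alertmanager/
    # prefix, so /alertmanager/foo is forwarded upstream as /foo
    proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080/;
}
```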

'connection reset by peer' for large response body by nginx mirror module

I want to copy requests to another backend with ngx_http_mirror_module.
This is my nginx.conf (nginx version 1.19.10):
worker_processes 1;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8888;
        server_name localhost;

        location / {
            proxy_pass http://127.0.0.1:8080;
            mirror /mirror;
        }

        location /mirror {
            internal;
            proxy_pass http://127.0.0.1:18080$request_uri;
        }
    }
}
My Spring applications listen on 8080 and 18080.
The problem is that when the backend server handling the mirrored request returns a large response body, it throws a ClientAbortException caused by "connection reset by peer".
Nothing is recorded in the nginx error log.
The nginx access log records status 200 for the mirrored request.
The problem tends to occur when the response is 4 KB or larger. Increasing proxy_buffer_size may help, but for large responses (8 KB or more?) the error occurs even when the response is smaller than proxy_buffer_size. I also tried changing subrequest_output_buffer_size, but nothing changed.
How can I stop the error?
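Some background that may explain the reset (hedged, since no answer is shown): nginx discards the body of a mirror subrequest and tears the mirror's upstream connection down once the main request finishes, so a backend still writing a large body sees the connection reset. One workaround sketch under that assumption:

```nginx
location /mirror {
    internal;
    # the mirrored response body is discarded by nginx anyway, so the
    # cleanest fix is for the mirror backend to reply with a small or
    # empty body (e.g. 204 No Content); failing that, larger buffers
    # may let nginx drain the body before closing the connection
    proxy_buffer_size 64k;
    proxy_buffers 8 64k;
    proxy_pass http://127.0.0.1:18080$request_uri;
}
```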

Nginx can't resolve server names when loading them from a file

I'm new to nginx and trying to achieve the following behaviour: I want to pass a header to nginx and route the request to another server based on that header. I want to load my map from a file, so I'm using the following code:
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;

    map $http_dest $new_server {
        include port_to_address.map;
        default google.com;
    }

    server {
        listen 200;
        server_name localhost;
        access_log logs/host.access.log main;

        location / {
            proxy_pass http://$new_server;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
with port_to_address.map in the following format:
value_1 yahoo.com;
value_2 netflix.com;
With this configuration nginx starts normally, but when I pass it a header that exists in the file it returns the following error:
2021/04/18 21:52:49 [error] 17352#20504: *3 no resolver defined to resolve yahoo.com, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:200"
When I use an IP address instead of the server name it works perfectly. I also have another nginx that round-robins requests between nodes it reads from a file (with upstream), and there I don't get any exception even though I'm using server names rather than IPs.
I read about using the resolver directive, but it doesn't work for me and I'd prefer to avoid it. Is there another way to make this work without the resolver or the servers' IPs? (I don't mind changing the map file structure if needed.)
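Why the upstream-based setup works while this one fails: host names written literally in the configuration (including inside upstream blocks) are resolved once at startup using the system resolver, whereas a variable in proxy_pass defers resolution to request time, which requires the resolver directive. However, when the variable's value matches the name of a defined upstream group, nginx uses that group instead of doing DNS. A resolver-free sketch based on that behaviour (upstream names and map values here are illustrative):

```nginx
# names resolved once at startup, no runtime resolver needed
upstream dest_yahoo   { server yahoo.com:80; }
upstream dest_netflix { server netflix.com:80; }

map $http_dest $new_server {
    include port_to_address.map;  # entries like: value_1 dest_yahoo;
    default dest_yahoo;
}

server {
    listen 200;
    location / {
        proxy_pass http://$new_server;
    }
}
```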

OpenResty - No response when making an HTTP call with Lua

When making a request to a URL from within my nginx, the response of this request is always missing. It seems that the request is never sent.
This is my nginx.conf
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" "$http_clientid"';
    access_log logs/access.log main;

    server {
        listen 8080;

        location /token-test {
            access_log logs/token.log main;

            content_by_lua_block {
                local cjson = require "cjson"
                local http = require "resty.http"
                local httpc = http.new()
                local ngx = ngx
                local res, err = httpc:request_uri("http://www.google.de", { method = "GET" })
                if not res then
                    ngx.say(cjson.encode({ message = "Error getting response", status = ngx.HTTP_INTERNAL_SERVER_ERROR }))
                    return ngx.HTTP_INTERNAL_SERVER_ERROR
                end
                return ngx.HTTP_OK
            }
        }
    }
}
I'm getting a 200 response status with this response body:
{
"status": 500,
"message": "Error getting response"
}
There are no errors in the logs.
Why do I get a 200 response instead of a 500, and why does the response body contain the message from the `if not res` branch?
UPDATE
I logged the err:
no resolver defined to resolve \"www.google.de\"
Thanks Danny
no resolver defined to resolve "www.google.de"
You need to configure the resolver:
https://github.com/openresty/lua-nginx-module#tcpsockconnect
In case of domain names, this method will use Nginx core's dynamic resolver to parse the domain name without blocking and it is required to configure the resolver directive in the nginx.conf file like this:
resolver 8.8.8.8; # use Google's public DNS nameserver
Why do I get a 200 response instead of 500
You should call ngx.exit to interrupt execution of the request and return the status code.
Replace
return ngx.HTTP_INTERNAL_SERVER_ERROR
with
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
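Putting both fixes together, the location might look like this (a sketch; 8.8.8.8 is just an example nameserver):

```nginx
http {
    resolver 8.8.8.8;  # required so cosocket-based clients like
                       # lua-resty-http can resolve domain names

    server {
        listen 8080;

        location /token-test {
            content_by_lua_block {
                local http = require "resty.http"
                local httpc = http.new()
                local res, err = httpc:request_uri("http://www.google.de", { method = "GET" })
                if not res then
                    ngx.log(ngx.ERR, "request failed: ", err)
                    -- ngx.exit stops processing and actually sets the status
                    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
                end
                ngx.say(res.body)
            }
        }
    }
}
```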

Testing load balancing in NGINX

I set up round-robin load balancing on NGINX for Apache Tomcat servers, with two servers in my proxy.conf file:
upstream appcluster1 {
    server IP_ADDRESS:8000;
    server IP_ADDRESS:8001;
}

server {
    location / {
        proxy_pass http://appcluster1;
    }
}
This is deployed in the cloud and I am able to hit the endpoint successfully. However, I want to verify that nginx actually alternates between the two servers. How would I go about this?
I tried this method, but I don't see anything in the logs that shows which server a request hit. Is there any other way to check whether nginx would go to the second server?
EDIT: I have another file called nginx.conf that looks like this:
load_module modules/ngx_http_js_module.so;
user nginx;
worker_processes auto;

events {
    worker_connections 2048;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    js_include auth.js;
    proxy_buffering off;

    log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                           'upstream_response_time $upstream_response_time'
                           ' request_time $request_time';
    # log_format main '$remote_addr - $remote_user [$time_local] $status '
    #                 '"$request" $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';
    # access_log logs/access.log main;

    # sendfile on;
    # tcp_nopush on;

    keepalive_timeout 65s;
    proxy_connect_timeout 120s;
    keepalive_requests 50;

    include /etc/nginx/conf.d/*.conf;
}
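One way to see which backend served each request: the upstreamlog format in nginx.conf above is defined but never referenced by an access_log directive, so nothing is written with it. Enabling it (log path here is an example) records $upstream_addr per request:

```nginx
http {
    log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                           'upstream_response_time $upstream_response_time'
                           ' request_time $request_time';

    # $upstream_addr shows which upstream server handled each request,
    # so round-robin appears as alternating IP:port values in this log
    access_log /var/log/nginx/upstream.log upstreamlog;
}
```

After a handful of curl requests to the load-balanced endpoint, consecutive log lines should alternate between IP_ADDRESS:8000 and IP_ADDRESS:8001.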
