I want to block excessive requests on a per-IP basis, allowing at most 12 requests per second. To that end, I have the following in /etc/nginx/nginx.conf:
http {
    ##
    # Basic Settings
    ##
    ...

    limit_req_zone $binary_remote_addr zone=perip:10m rate=12r/s;

    server {
        listen 80;

        location / {
            limit_req zone=perip nodelay;
        }
    }
}
Now, when I run ab -k -c 100 -n 900 'https://www.mywebsite.com/' in the terminal, the output reports only 179 non-2xx responses out of 900.
Why isn't Nginx blocking most requests?
Taken from here:
with nodelay nginx processes all burst requests instantly, and without
this option nginx makes excessive requests wait so that the overall
rate is no more than 1 request per second, and the last successful
request took 5 seconds to complete.
rate=6r/s actually means one request per 1/6th of a second. So if
you send 6 requests simultaneously, you'll get a 503 for 5 of them.
The problem might be in your testing method (or your expectations of it).
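A quick sanity check on the numbers (assuming the parts of the ab output not shown here): 900 - 179 = 721 requests succeeded. At rate=12r/s nginx admits about 12 requests per second of test duration, so 721 accepted requests correspond to a run of roughly 721 / 12 ≈ 60 seconds. If your full ab output shows an elapsed time in that range, the limiter is doing exactly what it was configured to do: it rejects only what exceeds 12 r/s over the actual duration of the test, not a fixed share of the 900 requests.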
I am running an nginx-ingress controller in a Kubernetes cluster, and one of my log statements for a request looks like this:
upstream_response_length: 0, 840
upstream_response_time: 60.000, 0.760
upstream_status: 504, 200
I cannot quite understand what that means. Does nginx have a response timeout of 60 seconds, retry the request once after that (successfully), and log both attempts?
P.S. Config for log format:
log-format-upstream: >-
{
...
"upstream_status": "$upstream_status",
"upstream_response_length": "$upstream_response_length",
"upstream_response_time": "$upstream_response_time",
...
}
According to the split_upstream_var method of ingress-nginx, it splits the results of nginx health checks.
Since nginx can have several upstreams, your log could be interpreted this way:
First upstream is dead (504):
upstream_response_length: 0      // response from the dead upstream has zero length
upstream_response_time: 60.000   // nginx dropped the connection after 60 seconds
upstream_status: 504             // response code; the upstream doesn't answer
Second upstream works (200):
upstream_response_length: 840    // the healthy upstream returned 840 bytes
upstream_response_time: 0.760    // the healthy upstream responded in 0.760 seconds
upstream_status: 200             // response code; the upstream is OK
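For illustration, a plain-nginx equivalent of what the ingress controller generates could look like the sketch below (the upstream name and addresses are made up). With the default proxy_next_upstream error timeout behaviour, a request that times out against the first server is retried against the next one, and each attempt appends its own value to $upstream_status, $upstream_response_time and $upstream_response_length:
upstream app {
    server 10.0.0.1:8080;   # hypothetical endpoint that times out (504, 60.000, 0)
    server 10.0.0.2:8080;   # hypothetical healthy endpoint (200, 0.760, 840)
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_read_timeout 60s;             # matches the 60.000 in the log
        proxy_next_upstream error timeout;  # retry the next server on timeout (the default)
    }
}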
I am learning the nginx limit_req module (ngx_http_limit_req_module). I don't get the concept of nodelay in this module. I have tried the two configurations below, with nodelay and without nodelay. In both cases, when I send 10 requests within 1 second, I get a 503 Service Temporarily Unavailable error for 6 requests and 4 requests succeed. My question is: if the result is the same with nodelay and without nodelay, what is the use of the nodelay option here?
With nodelay:
limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
limit_req zone=one burst=2 nodelay;
and without nodelay:
limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
limit_req zone=one burst=2;
Let's take this config:
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

server {
    listen 127.0.0.1:81;

    location / {
        limit_req zone=one burst=5;
        echo 'OK';
    }

    location /nodelay {
        limit_req zone=one burst=5 nodelay;
        echo 'OK';
    }
}
and test it with nodelay
$ siege -q -b -r 1 -c 10 http://127.0.0.1:81/nodelay
done.
Transactions: 6 hits
Availability: 60.00 %
Elapsed time: 0.01 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 600.00 trans/sec
Throughput: 0.09 MB/sec
Concurrency: 0.00
Successful transactions: 6
Failed transactions: 4
Longest transaction: 0.00
Shortest transaction: 0.00
and without nodelay
$ siege -q -b -r 1 -c 10 http://127.0.0.1:81/
done.
Transactions: 6 hits
Availability: 60.00 %
Elapsed time: 5.00 secs
Data transferred: 0.00 MB
Response time: 2.50 secs
Transaction rate: 1.20 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 3.00
Successful transactions: 6
Failed transactions: 4
Longest transaction: 5.00
Shortest transaction: 0.00
They both passed 6 requests, but with nodelay nginx processes all burst requests instantly, while without this option nginx makes excessive requests wait so that the overall rate is no more than 1 request per second, and the last successful request took 5 seconds to complete.
EDIT: rate=6r/s actually means one request per 1/6th of a second. So if you send 6 requests simultaneously, you'll get a 503 for 5 of them.
There is a good answer with a “bucket” explanation: https://serverfault.com/a/247302/211028
TL;DR: The nodelay option is useful if you want to impose a rate limit without constraining the allowed spacing between requests.
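To make the difference concrete, here is a rough timeline for the config above (rate=1r/s, burst=5, 10 simultaneous requests); which particular 4 requests get rejected depends on arrival order:
Without nodelay:
t=0s    request 1 is forwarded; requests 2-6 are queued; requests 7-10 get 503
t=1-5s  one queued request is released per second; the last completes at t=5s,
        which matches the 5.00 s elapsed time in the siege run above
With nodelay:
t=0s    requests 1-6 are all forwarded immediately; requests 7-10 get 503
        the 6 slots then free up at one per second, so the long-term rate is
        still 1 r/s even though the burst was served instantly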
There's new documentation from Nginx with examples that answers this: https://www.nginx.com/blog/rate-limiting-nginx/
Here's the pertinent part. Given:
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

location /login/ {
    limit_req zone=mylimit burst=20;
    ...
}
The burst parameter defines how many requests a client can make in
excess of the rate specified by the zone (with our sample mylimit
zone, the rate limit is 10 requests per second, or 1 every 100
milliseconds). A request that arrives sooner than 100 milliseconds
after the previous one is put in a queue, and here we are setting the
queue size to 20.
That means if 21 requests arrive from a given IP address
simultaneously, NGINX forwards the first one to the upstream server
group immediately and puts the remaining 20 in the queue. It then
forwards a queued request every 100 milliseconds, and returns 503 to
the client only if an incoming request makes the number of queued
requests go over 20.
If you add nodelay:
location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    ...
}
With the nodelay parameter, NGINX still allocates slots in the queue
according to the burst parameter and imposes the configured rate
limit, but not by spacing out the forwarding of queued requests.
Instead, when a request arrives “too soon”, NGINX forwards it
immediately as long as there is a slot available for it in the queue.
It marks that slot as “taken” and does not free it for use by another
request until the appropriate time has passed (in our example, after
100 milliseconds).
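Putting numbers on that (my own example, reusing the mylimit zone above): if 21 requests arrive together at t=0 with burst=20 nodelay, all 21 are forwarded immediately and all 20 burst slots are marked taken. One slot frees every 100 ms, so a 22nd request arriving at t=50 ms gets a 503, while one arriving at t=150 ms finds a freed slot and is forwarded at once.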
According to the mailing list (Nov 2013), nginx did not support disabling proxy request buffering:
The proxy_buffering directive disables response buffering, not request
buffering.
As of now, there is no way to prevent request body buffering in nginx.
It's always fully read by nginx before a request is passed to an
upstream server. It's basically a part of nginx being a web
accelerator - it handles slow communication with clients by itself and
only asks a backend to process a request when everything is ready.
Since version 1.7.11 (Mar 2015), nginx provides the proxy_request_buffering directive:
Update: in nginx 1.7.11 the proxy_request_buffering directive is
available, which makes it possible to disable buffering of a request
body. It should be used with care though, see docs.
See docs for more details:
Syntax: proxy_request_buffering on | off;
Default: proxy_request_buffering on;
Context: http, server, location
When buffering is enabled, the entire request body is read from the
client before sending the request to a proxied server.
When buffering is disabled, the request body is sent to the proxied
server immediately as it is received. In this case, the request cannot
be passed to the next server if nginx already started sending the
request body.
When HTTP/1.1 chunked transfer encoding is used to send the original
request body, the request body will be buffered regardless of the
directive value unless HTTP/1.1 is enabled for proxying.
The question is about buffering for Nginx (Load Balancer). For instance, we have the following scheme:
Nginx (LB) -> Nginx (APP) -> Backend
Nginx (APP) buffers the request from the load balancer and also buffers the response from Backend. But does it make sense to buffer the request and response on the Nginx (LB) side if both Nginx nodes are physically located close to each other (less than 2 ms ping) with a fast network connection in between?
I measured the following benchmarks. Please note that this is for illustration only; production loads would be significantly higher:
siege --concurrent=50 --reps=50 https://domain.com
proxy_request_buffering on;
Transactions: 2500 hits
Availability: 100.00 %
Elapsed time: 58.57 secs
Data transferred: 577.31 MB
Response time: 0.55 secs
Transaction rate: 42.68 trans/sec
Throughput: 9.86 MB/sec
Concurrency: 23.47
Successful transactions: 2500
Failed transactions: 0
Longest transaction: 2.12
Shortest transaction: 0.10
proxy_request_buffering off;
Transactions: 2500 hits
Availability: 100.00 %
Elapsed time: 57.80 secs
Data transferred: 577.31 MB
Response time: 0.53 secs
Transaction rate: 43.25 trans/sec
Throughput: 9.99 MB/sec
Concurrency: 22.75
Successful transactions: 2500
Failed transactions: 0
Longest transaction: 2.01
Shortest transaction: 0.09
limit_req_zone rate doesn't work as I thought. No matter how high I set it, it still behaves like rate=1r/s.
My nginx.conf:
limit_req_zone $binary_remote_addr zone=lwrite:10m rate=300r/s;
...
limit_req zone=lwrite burst=5;
After reading the docs (http://nginx.org/en/docs/http/ngx_http_limit_req_module.html), I thought nginx should only delay requests when an IP exceeds 300 r/s, and return 5xx when it exceeds 305 r/s.
However, if I run the test ab -c 12 -n 12 '127.0.0.1:8090/index.html?_echo=abc', the output is:
Concurrency Level: 12
Time taken for tests: 0.051 seconds
Complete requests: 12
Failed requests: 6
(Connect: 0, Receive: 0, Length: 6, Exceptions: 0)
Write errors: 0
Non-2xx responses: 6
I found 5 warnings and 6 errors in the nginx error.log. It turns out only the first request succeeded immediately, the next 5 were delayed, and the last 6 returned errors. So no matter how high I set the rate, it still behaves like rate=1r/s.
Why? Has anyone met the same problem? My nginx versions are 1.5.13 and 1.7.11.
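For what it's worth, the numbers above look consistent with the limiter working as configured rather than falling back to 1 r/s (my own reading of the output, not an authoritative diagnosis): rate=300r/s means one request per ~3.33 ms, and ab -c 12 -n 12 fires all 12 requests at effectively the same instant. So 1 request is forwarded immediately, burst=5 queues the next 5 (the 5 warnings), and the remaining 6 are rejected (the 6 errors), exactly as logged. The 5 queued requests then drain at 3.33 ms intervals, i.e. in about 17 ms, which fits the 0.051 s total test time. Raising burst, or spreading the requests out in time, would let more of them through.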
I have 4 upstream blocks in my nginx config that I'm using depending on the incoming request's scheme or the geo location of the requesting client.
Every time I have to restart nginx it takes around 80 seconds to complete. If I only have 3 upstreams declared it takes about 40 seconds, and with 2 upstreams it restarts pretty much immediately, like it normally does.
Reloads take half the time (40 seconds with 4 upstreams, 20 seconds with 3 upstreams).
There are no errors in the nginx error log, even at debug log level, and if I run /usr/sbin/nginx -t it says the test is successful, but it takes as long as a reload does.
Nginx resolves the IPs of all upstreams at (re)start. Check your DNS.
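If the upstream servers are specified by host name and DNS is slow, one common workaround (a sketch only; the resolver address and backend host are placeholders) is to move resolution to request time by combining the resolver directive with a variable in proxy_pass, so startup no longer blocks on DNS lookups:
resolver 10.0.0.53 valid=30s;   # hypothetical internal DNS server

server {
    listen 80;

    location / {
        set $backend "http://app.internal.example:8080";  # hypothetical backend host
        proxy_pass $backend;    # a variable here makes nginx resolve at request time
    }
}
Note that with a variable in proxy_pass you lose upstream{} block features such as load balancing and keepalive, so treat this as a diagnostic workaround rather than a drop-in replacement.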