Configure NGINX to respond 204 to a percentage of incoming requests - nginx

I'd like to throttle incoming requests into an nginx route.
The current config is similar to this:
upstream up0 {
    server x.x.x.x:1111;
    keepalive 1024;
}

server {
    location /auc {
        limit_req zone=one burst=2100;
        proxy_pass http://up0/auc;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
I'd like to control the number of requests I see at the upstream server. For all other requests I'd like nginx to respond with a 204 response.
Controlling by percentage of incoming requests would also work.
Thanks.

Nginx is very effective at limiting requests using limit_req_zone and limit_req.
First, create a zone with defined limits. For a global limit, the key of the zone can be static; it is also possible to use a variable such as the source IP address as the key, which is useful for limiting specific IPs or just the slower pages on your site. The rate can be defined in requests per second or per minute.
limit_req_zone key zone=name:size rate=rate;
Next, create a rule to apply that zone to incoming requests. The location directive can be used to apply the rule only to specific requests, or it can be server wide. The burst option will queue a specified number of requests that exceed the rate limit, which is useful for absorbing short bursts of traffic rather than returning errors.
limit_req zone=name [burst=number] [nodelay];
The default response code for traffic exceeding the rate limit and not held in a burst queue is 503 (Service Unavailable). An alternative code such as 204 (No Content) can be set:
limit_req_status code;
Putting all that together, a valid config that limits all requests in the location block to 10 per second, queues up to 50 requests before rejecting, and returns the specified 204 response would look like:
http {
    ...
    limit_req_zone $hostname zone=limit:20m rate=10r/s;
    limit_req_status 204;

    server {
        ...
        location / {
            ...
            limit_req zone=limit burst=50;
        }
    }
}
In practice it's likely the server block will be in a different file included from within the http block. I've just condensed them for clarity.
To test, either use a flood tool or set the request rate to 10r/m (10 per minute) and use a browser. It's useful to check the logs and monitor the number of rejected requests so that you are aware of any impact on your users.
Multiple limit_req_zone rules can be combined to specify a loose global limit plus stricter per-source-IP limits. This makes it possible to target the few most persistent users before affecting the wider user base.
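As a sketch of that combination (zone names, sizes, and rates here are illustrative, not from the original config), two zones can be declared and both applied in the same location:

```nginx
http {
    # Global limit: a static key means all requests share one counter.
    limit_req_zone $server_name zone=global:1m rate=100r/s;
    # Per-client limit keyed on the source IP address.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

    server {
        location / {
            # Both rules apply; the stricter one trips first for a single IP.
            limit_req zone=global burst=200;
            limit_req zone=perip burst=10;
        }
    }
}
```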

Related

Nginx limit number of ip's connected to server

Can I limit the number of IPs connected to the server using nginx?
I want to limit the number of clients that can access example.com/download at the same time.
I don't want to limit connections per IP; each IP can have multiple connections to the server.
I searched everywhere but I couldn't find a solution.
Yes, you can limit a specific URL in nginx.
For that you need to use limit_conn_zone and limit_conn:
http {
    # Use a static key so the limit applies to all clients combined;
    # keying on $binary_remote_addr would limit per IP instead,
    # which the question explicitly does not want.
    limit_conn_zone $server_name zone=download:10m;

    server {
        location /download {
            limit_conn download 10;  # set the limit here: 10, 20, 30, etc.
        }
    }
}

NGINX enable rate limiting only on successful requests

Is there are a way to enable rate limiting only for successful requests (i.e. HTTP status code 200)?
For example, in the following snippet from my configuration...
http {
    limit_req_zone $binary_remote_addr zone=test:10m rate=2r/m;

    server {
        location / {
            limit_req zone=test;
            proxy_pass http://localhost:3000/;
            ...
        }
        ...
    }
    ...
}
...requests are successfully rate limited (up to two requests per minute).
However, as this is for a contact form which sends me emails, I do not care about rate limiting if http://localhost:3000/ returns an error as no email will have been sent.
No, there isn't.
Nginx processes an HTTP request in 11 phases, from reading the request to sending the response: post-read, server-rewrite, find-config, rewrite, post-rewrite, pre-access, access, post-access, try-files, content, and log.
proxy_pass runs in the content phase while limit_req runs in the pre-access phase (see ngx_http_limit_req_module.c). Pre-access phase handlers are executed before content phase handlers, so the limit_req handler cannot check whether the response code is OK.

Nginx rate limiting based on apiid/apikey

I am using rate limiting based on IP address, and the example below works perfectly for that.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    ...
    server {
        ...
        location /search/ {
            limit_req zone=one burst=5;
        }
    }
}
Now we need to implement rate limiting based on an API ID/key which will be part of the HTTP request. Each API key will have a restricted number of connections, and when that goes beyond the restricted number I must return 503 or something like that.
How do I get the API key/ID from the URL into a variable and set a limit for each API key we have?
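The thread does not include an answer, but one sketch (assuming the key is passed as a query-string parameter such as ?apikey=...) relies on nginx exposing request arguments as $arg_name variables, so the zone can be keyed on the API key directly:

```nginx
http {
    # Key the zone on the ?apikey=... query parameter.
    # Note: requests with an empty key value are not rate limited,
    # so requests without an apikey pass through unthrottled.
    limit_req_zone $arg_apikey zone=perkey:10m rate=1r/s;
    limit_req_status 503;

    server {
        location /search/ {
            limit_req zone=perkey burst=5;
        }
    }
}
```

If the key is carried in a header instead, $http_x_api_key (for an X-Api-Key header) could serve as the zone key in the same way.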

NGinX Rate Limiting With No Burst

I am experiencing unusual behavior with rate limiting in NGinX. I have been tasked with supporting 10 requests per second and not to use the burst option. I am using the nodelay option to reject any requests over my set rate.
My config is:
..
http {
    ..
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
    ..
    server {
        ..
        location / {
            limit_req zone=one nodelay;
            limit_req_status 503;
            ..
        }
    }
}
The behavior I am seeing is that if a request is sent before a response is received for a previous request, NGinX returns a 503 error. I see this behavior with as few as 2 requests in a second.
Is there something missing from my configuration that is causing this behavior?
Is the burst option needed to service multiple requests at once?
burst works like a queue, and nodelay means the queued requests will not be delayed to the next interval. If you do not specify a queue, you are not allowing any other simultaneous request from that IP; the zone takes effect per IP because your key is $binary_remote_addr.
You need a burst.
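For example, keeping the rejection behavior for sustained overload while still serving simultaneous requests, burst can be combined with nodelay (the burst size here is illustrative):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

    server {
        location / {
            # Up to 20 excess requests are admitted; nodelay forwards
            # them immediately instead of pacing them to 10r/s.
            # Requests beyond rate + burst still get the 503.
            limit_req zone=one burst=20 nodelay;
            limit_req_status 503;
        }
    }
}
```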

Setting a trace id in nginx load balancer

I'm using nginx as a load balancer in front of several upstream app servers and I want to set a trace id to use to correlate requests with the app server logs. What's the best way to do that in Nginx, is there a good 3rd party module for this?
Otherwise a pretty simple way would be to base it off of timestamp (possibly plus a random number if that's not precise enough) and set it as an extra header on the request, but the only set_header command I see in the docs is for setting a response header.
nginx 1.11.0 added the new variable $request_id, which is a unique identifier, so you can do something like:
location / {
    proxy_pass http://upstream;
    proxy_set_header X-Request-Id $request_id;
}
See reference at http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_id
In most cases you don't need a custom module; you can simply set a header with a combination of embedded variables from http_core_module which is (most probably) unique. Example:
location / {
    proxy_pass http://upstream;
    proxy_set_header X-Request-Id $pid-$msec-$remote_addr-$request_length;
}
This would yield a request id like "31725-1406109429.299-127.0.0.1-1227"
and should be "unique enough" to serve as a trace id.
Old question, new answer, suitable for nginx versions 1.3.8, 1.2.5 and above.
You can use a combination of $connection and $connection_requests now.
Just define your own variable in the server block:
server {
    ...
    set $trace_id $connection-$connection_requests;
    ...
}
This id is going to be unique across nginx unless the server gets restarted.
$connection - The connection serial number. This is a unique number assigned by nginx to each connection. If multiple requests are received on a single connection, they will all have the same connection serial number. Serial numbers reset when the master nginx process is terminated, so they will not be unique over long periods of time.
$connection_requests - The number of requests made through this connection.
Then, in your location block, set the actual trace ID:
location / {
    ...
    proxy_set_header X-Request-Id $trace_id;
    ...
}
Bonus: Make $trace_id unique even after server restarts:
set $trace_id $connection-$connection_requests-$msec;
$msec - The current unix timestamp in seconds, with millisecond resolution (float).
In our production environment we have a custom module like this. It generates a unique trace id which is then pushed into the HTTP headers sent to the upstream server. The upstream server checks whether that field is set, reads the value, and writes it to its access_log; thus, we can trace the request.
I also found a 3rd-party module that looks much the same: nginx-operationid. I hope it is helpful.
