I have a server configured in nginx, with the following directive creating my rate-limit zone:
limit_req_zone $key zone=six_zone:10m rate=60r/m;
In my location, I use a module to serve the requests. The location supports the GET, POST and DELETE methods, and I am trying to rate limit only GET requests to it. This is what I thought might work, but it does not:
location /api/ {
if ($request_method = GET) {
limit_req zone=six_zone;
}
reqfwder;
}
Any help or pointers towards how I can approach this? Thanks.
Hope this helps. In the http context of your NGINX configuration, add these lines:
http {
... # your nginx.conf here
# Maps the client address to $limit only when the request method is GET
map $request_method $limit {
default "";
GET $binary_remote_addr;
}
# Creates a 10 MB zone in memory for storing binary client addresses
limit_req_zone $limit zone=my_zone:10m rate=1r/s;
}
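With the zone keyed by the map, limit_req can then be applied unconditionally in the location; any request whose method is not matched in the map gets an empty key and is simply not counted. A minimal sketch for the asker's location (the burst value is an assumption):

```nginx
location /api/ {
    # Only methods that map to a non-empty key are counted against the zone.
    limit_req zone=my_zone burst=5;
    reqfwder;
}
```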
**Rate limiting for the entire NGINX process** (the zone, here named global_zone, must first be declared with limit_req_zone as above):
http {
... # your nginx.conf here
limit_req zone=global_zone;
}
REF: https://product.reverb.com/first-line-of-defense-blocking-bad-post-requests-using-nginx-rate-limiting-507f4c6eed7b (the same technique, applied there to POST requests)
I would like to rate limit all incoming traffic except for HEAD requests. We have implemented a rate limit using Nginx, but it currently limits all traffic; I want to exclude HEAD requests from the rate limit.
Here is the code snippet used for the rate limit:
http {
...
limit_req_zone $binary_remote_addr zone=ratelimit:50m rate=200r/s;
limit_req_status 429;
...
...
server {
limit_req zone=ratelimit burst=400 nodelay;
}
...
}
According to the limit_req_zone directive documentation:
Requests with an empty key value are not accounted.
So I just set the zone key to an empty string for the HEAD request method:
http {
...
map $request_method $ratelimit_key {
HEAD '';
default $binary_remote_addr;
}
limit_req_zone $ratelimit_key zone=ratelimit:50m rate=200r/s;
limit_req_status 429;
...
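For completeness, the server block from the original snippet does not need to change; it keeps referencing the same zone, and HEAD requests now bypass it because their key is empty (a sketch reusing the burst value from the question):

```nginx
server {
    # HEAD requests map to an empty key and are not accounted.
    limit_req zone=ratelimit burst=400 nodelay;
}
```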
I want to impose a request limit for uncached content on my NGINX reverse proxy. I have multiple locations defined, and content may or may not get cached due to other rules, so I cannot set a request limit per location; I have to handle this differently.
According to the documentation at https://www.nginx.com/blog/rate-limiting-nginx/#Advanced-Configuration-Examples, I can use the map feature to impose a request limit. So I tried this and created the following configuration snippet:
map $upstream_cache_status $limit {
default 1;
MISS 1;
HIT 0;
}
map $limit $limit_key {
0 "";
1 $binary_remote_addr;
}
limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;
To test my map first, I added the following to my location:
add_header X-Test $limit;
And I see that it works! For every cached resource ($upstream_cache_status = HIT), $limit is 0; for every uncached resource (MISS), $limit is 1.
Now comes the weird behaviour. As soon as I add limit_req zone=req_zone burst=10 nodelay; into my location, $limit seems to be stuck at 1, no matter if the $upstream_cache_status is HIT or MISS.
The location looks like this:
location ~* \.(jpg|jpeg|png|gif|webp|svg|svgz|ico|pdf|doc|docx|xls|xlsx|csv|zip|gz|woff|woff2|ttf|otf|eot)$ {
limit_req zone=req_zone burst=10 nodelay;
[...]
add_header X-Test $limit;
[...]
}
Is this a NGINX bug or am I missing something here? NGINX version is 1.20.1 on AlmaLinux 8.5.
Rate limiting runs early, during the request-processing (preaccess) phase. The cache lookup and the upstream response happen later, during the content phase. So at the moment limit_req evaluates its key, $upstream_cache_status has no value yet, and your map falls through to the default of 1.
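One possible workaround, sketched here as an untested assumption: split caching and limiting across two server blocks, so that only cache misses ever reach the block that applies limit_req (the cache name, backend, and port are placeholders):

```nginx
# Front server: caching only. Cache hits are answered here;
# misses are proxied to the internal limiter below.
server {
    listen 80;
    location / {
        proxy_cache my_cache;
        proxy_pass http://127.0.0.1:8081;
    }
}

# Internal limiter: only cache misses arrive here, so the zone
# can be keyed directly on $binary_remote_addr.
server {
    listen 127.0.0.1:8081;
    location / {
        limit_req zone=req_zone burst=10 nodelay;
        proxy_pass http://backend;
    }
}
```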
I have nginx rate-limiting working when using the following
limit_req_zone $binary_remote_addr zone=mylimit:20m rate=50r/m;
I now want to exempt certain IPs from it, so I've changed it to
geo $limit {
default 1;
1.2.3.4/32 0;
}
map $limit $mylimit {
0 "";
1 $binary_remote_addr;
}
limit_req_zone $my_limit zone=mylimit:20m rate=50r/m;
Following the example here https://www.nginx.com/blog/rate-limiting-nginx/
But the rate limit is ignored, even for requests coming from a different IP than the one in the config.
This is using nginx version: nginx/1.14.0 (Ubuntu)
In the server block I have
limit_req zone=mylimit burst=15 nodelay;
which was working before
I'm evaluating nginx to act as a rate limiter for a multi-tenancy REST API system. I need to limit API calls by tenant-id.
For example, I want to allow 100 r/s for tenant1 and only 50 r/s for tenant2.
This is easily achieved when there are different URLs, like "me.com/tenant1/api" and "me.com/tenant2/api" (with the location directive).
But in my case the URL is the same for all tenants, "me.com/api" (I can't change this).
To find the tenant-id I need to extract a JSON attribute from the Body of the request, and then check the DB for the real tenant-id.
Is it possible to limit_req with my requirements?
Thanks for the help!
I decided to build another service, getTenant, for parsing the body and extracting the tenant from the DB. This service is called internally by Nginx.
I'm not sure it is the best nginx (/openresty) solution, but this is what I came up with:
# Constant keys: each zone uses a single shared bucket for its tenant.
limit_req_zone t1Limit zone=tenant1Zone:10m rate=200r/s;
limit_req_zone t2Limit zone=tenant2Zone:10m rate=90r/s;
server {
location /api {
content_by_lua_block {
ngx.req.read_body();
local reqBody = ngx.req.get_body_data()
local res = ngx.location.capture("/getTenant", {method=ngx.HTTP_POST,body=reqBody});
local tenantId = res.body;
if tenantId == "none" then
ngx.log(ngx.ERR, "Tenant not found!");
ngx.say(tenantId);
else
ngx.req.set_header("x_myTenantId", tenantId)
local res2 = ngx.location.capture("/" .. tenantId .."/doApi", {method=ngx.HTTP_POST,body=reqBody});
if res2.status == ngx.HTTP_OK then
ngx.say(res2.body);
ngx.exit(res2.status);
else
ngx.status = res2.status
ngx.exit(res2.status)
end
end;
}
}
location /getTenant {
internal; #this is not accessible from outside.
proxy_pass http://UpStream1/getCustomer;
proxy_set_header X-Original-URI $request_uri;
}
location /tenant1/doApi {
internal; #this is not accessible from outside.
# Proxy all requests to the AReqUpStream server group
proxy_pass http://UpStream2/doApi;
limit_req zone=tenant1Zone burst=25;
limit_req_log_level notice;
}
location /tenant2/doApi {
internal; #this is not accessible from outside.
# Proxy all requests to the AReqUpStream server group
proxy_pass http://UpStream2/doApi;
limit_req zone=tenant2Zone burst=10 ;#nodelay;
limit_req_status 409;
limit_req_log_level notice;
}
}
Basically, when me.com/api is called, a new subrequest is issued to the /getTenant service. The response of that call is used to build another subrequest to the /tenant[X]/doApi service. That way I can define a location per tenant and apply a different rate limit to each.
Comments on that are more than welcome!
We try to save nginx resources by limiting the number of requests per second:
http {
limit_req_zone $binary_remote_addr zone=gulag:10m rate=2r/s;
server
{
location / {
proxy_pass http://0.0.0.0:8181;
limit_req zone=gulag burst=40;
}
}
}
However, most employees in our company are also heavy users of our own website. Since everyone in the company appears to come from the same IP address, we're getting 503 errors because nginx thinks all the traffic is coming from one user. Can we add our IP as an exception to the requests-per-second limit?
Yes, you can. Just a quote from the documentation:
The key is any non-empty value of the specified variable (empty values are not accounted).
So you can achieve your goal by using geo and map modules like this:
geo $limited_net {
default 1;
10.1.0.0/16 0;
}
map $limited_net $addr_to_limit {
0 "";
1 $binary_remote_addr;
}
limit_req_zone $addr_to_limit zone=gulag:10m rate=2r/s;
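With the zone now keyed on $addr_to_limit, the limit_req directive from the question's location keeps working unchanged: clients inside 10.1.0.0/16 produce an empty key and are never accounted. A sketch reusing the question's server block:

```nginx
server {
    location / {
        proxy_pass http://0.0.0.0:8181;
        # Requests from 10.1.0.0/16 map to an empty key and bypass the limit.
        limit_req zone=gulag burst=40;
    }
}
```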