Can I limit the number of IPs connected to the server using nginx?
I want to limit the number of clients that can access example.com/download at the same time.
I don't want to limit connections per IP; each IP can have multiple connections to the server.
I searched everywhere but couldn't find a solution.
Yes, you can limit a specific URL in nginx.
For that you need to use limit_conn_zone and limit_conn:
http {
    limit_conn_zone $binary_remote_addr zone=download:10m;

    server {
        location /download {
            limit_conn download 10;  # set whatever limit you need, e.g. 10, 20, 30
        }
    }
}
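Note that $binary_remote_addr keys the zone per client IP, so the config above limits each IP separately. Since the question asks for a cap on the total number of clients regardless of IP, a constant key such as $server_name gives one shared counter instead; a minimal sketch (the zone name is made up):
http {
    # A single shared counter for the whole virtual host, not one per client IP.
    limit_conn_zone $server_name zone=download_total:10m;

    server {
        location /download {
            limit_conn download_total 10;  # at most 10 concurrent connections to /download in total
        }
    }
}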
I am currently trying to make an nginx proxy work where it passes to different IPs depending on the origin.
stream {
    server {
        listen 1000 udp;
        proxy_pass 10.0.0.2;
        allow 10.0.0.3;
    }
    server {
        listen 1000 udp;
        proxy_pass 10.0.0.3;
        allow 10.0.0.2;
    }
}
Obviously this does not work, as I cannot listen on the same port twice. I tried something with "if", but it is not allowed there. Any ideas? I just want to proxy the traffic between the two IPs.
You need a transparent proxy or some kind of packet filter or firewall, not nginx, since nginx is a reverse proxy and not suited to this task.
While I'm not sure you chose the right way to solve your task (unless you need some kind of load balancing), this should be possible using several upstream blocks and a geo block:
stream {
    upstream first_upstream {
        server 10.0.0.2:1000;
    }
    upstream second_upstream {
        server 10.0.0.3:1000;
    }
    upstream third_upstream {
        server 10.0.0.4:1000;
    }

    # Choose an upstream name based on the client's source address.
    geo $upstream_name {
        10.0.0.0/24 first_upstream;
        10.0.1.0/24 second_upstream;
        default     third_upstream;
    }

    server {
        listen 1000 udp;
        proxy_pass $upstream_name;
    }
}
If you need load balancing, see the TCP and UDP Load Balancing article.
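For reference, a minimal load-balanced variant would simply list both hosts in one upstream and let nginx spread the UDP traffic between them (a sketch; the upstream name is made up and round-robin is the default method):
stream {
    upstream both_backends {
        server 10.0.0.2:1000;  # round-robin between the two hosts by default
        server 10.0.0.3:1000;
    }
    server {
        listen 1000 udp;
        proxy_pass both_backends;
    }
}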
I have some video files on my server and I'm trying to limit the number of connections for each video. If I use $binary_remote_addr, the user cannot download other video files at the same time. I want to restrict based on the link address instead of $binary_remote_addr. Do you think this is possible? Can it be done using map?
limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_conn addr 3;
You can use any key you want, like:
limit_conn_zone $request_uri zone=uri:10m;
limit_conn uri 1;
This will allow only one connection per request URI at a time. Increase the value of 1 to suit your needs.
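To address the map part of the question: a map isn't strictly needed, since $uri (unlike $request_uri) already excludes the query string, so every request for the same file shares one bucket. A sketch in context, with an assumed /videos/ location and zone name:
http {
    # One counter per normalized URI; query strings are ignored because the key is $uri.
    limit_conn_zone $uri zone=per_video:10m;

    server {
        location /videos/ {
            limit_conn per_video 3;  # at most 3 concurrent downloads of any single video file
        }
    }
}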
I am using rate limiting based on IP address and the example below works perfectly for that.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    ...
    server {
        ...
        location /search/ {
            limit_req zone=one burst=5;
        }
    }
}
Now we need to implement rate limiting based on an apiid/apikey which will be part of the HTTP request. Each API key will have a restricted number of connections, and when that limit is exceeded I must return a 503 or something like that.
How can I get the apikey/apiid from the URL into a variable and set a limit for each API key we have?
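One possible sketch, assuming the key is passed as a query parameter such as /search/?apikey=XYZ: nginx exposes query arguments as $arg_<name>, so $arg_apikey can be used directly as the zone key (the zone name, rate, and location below are illustrative):
http {
    # One rate-limit bucket per API key taken from the ?apikey= query argument.
    # Requests with an empty key (no apikey present) are not rate limited by this zone.
    limit_req_zone $arg_apikey zone=perkey:10m rate=5r/s;

    server {
        location /search/ {
            limit_req zone=perkey burst=10 nodelay;
            # Requests over the limit get 503 by default; limit_req_status can change that.
        }
    }
}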
I am looking for a way to limit the maximum number of concurrent connections to 1.
I do not want a connection limit per IP; I already know this is supported.
As far as I can see, max_conns would be exactly what I'm looking for, but unfortunately it's not available in the free version:
Additionally, the following parameters are available as part of our
commercial subscription
Limiting worker_connections is not an option, as the minimum it wants is 4, and it affects more than the incoming requests.
My conf:
server {
    listen 80;
    server_name localhost;

    location / {
        rewrite_by_lua '
            [some lua code]
        ';
        proxy_pass http://127.0.0.1:8080;
    }
}
Literally moments after I posted this question, I stumbled upon this while googling for how to whitelist IPs from a file in Nginx! Kind of funny considering I spent the last 2 hours googling for specific terms about rate limiting; talk about relevance, heh..
limit_conn_zone $server_name zone=servers:1m;  # one shared counter keyed by the server name
limit_conn servers 1;                          # allow a single concurrent connection in total
Putting this in the http { } block seems to do the trick.
I'd like to throttle incoming requests into an nginx route.
The current config is similar to this:
upstream up0 {
    server x.x.x.x:1111;
    keepalive 1024;
}

server {
    location /auc {
        limit_req zone=one burst=2100;

        proxy_pass http://up0/auc;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
I'd like to control the number of requests I see at the upstream server. For all other requests I'd like nginx to respond with a 204 response.
Controlling by percentage of incoming requests would also work.
Thanks.
Nginx is very effective at limiting requests using limit_req_zone and limit_req.
First, create a zone with defined limits. For a global limit the zone key can be static; it's also possible to use variables, such as the source IP address, as the key, which is useful for limiting specific IPs or just the slower pages on your site. The rate can be defined in requests per second or per minute.
limit_req_zone key zone=name:size rate=rate;
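For example (the zone names and rates here are only illustrative):
limit_req_zone $server_name        zone=global:10m rate=10r/s;  # one shared bucket for the whole virtual host
limit_req_zone $binary_remote_addr zone=perip:10m  rate=30r/m;  # one bucket per client IP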
Next, create a rule to apply that zone to incoming requests. The location directive can be used to apply the rule only to specific requests, or it can be server wide. The burst option will queue a specified number of requests that exceed the rate limit, and is useful for throttling short bursts of traffic rather than returning errors.
limit_req zone=name [burst=number] [nodelay];
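For example, applied inside a location and reusing the perip zone from the snippet above (illustrative values; nodelay processes the queued burst requests immediately instead of pacing them out at the zone rate):
location /search/ {
    limit_req zone=perip burst=10 nodelay;  # absorb short spikes of up to 10 extra requests
}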
The default response code for traffic exceeding the rate limit and not held in a burst queue is 503 (Service Unavailable). Alternate codes like 204 (No Content) can be set.
limit_req_status code;
Putting all that together, a valid config to limit all requests in the location block to 10 per second, with a buffer to queue up to 50 requests before returning errors, and returning the specified 204 response, would look like:
http {
    ....
    limit_req_zone $hostname zone=limit:20m rate=10r/s;
    limit_req_status 204;

    server {
        ...
        location / {
            ...
            limit_req zone=limit burst=50;
        }
    }
}
In practice it's likely the server block will be in a different file included from within the http block. I've just condensed them for clarity.
To test, either use a flood tool or set the request rate to 10r/m (10 per minute) and use a browser. It's useful to check the logs and monitor the number of rejected requests so that you are aware of any impact on your users.
Multiple limit_req_zone rules can be combined to specify loose global limits and then stricter per-source-IP limits. This lets you target the most persistent few users before affecting the wider user base.
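A hedged sketch of that combination (names and numbers are illustrative): a loose shared limit plus a stricter per-IP limit applied to the same location, where a request is rejected if it exceeds either one.
http {
    limit_req_zone $hostname           zone=global:20m rate=100r/s;
    limit_req_zone $binary_remote_addr zone=perip:10m  rate=10r/s;

    server {
        location / {
            limit_req zone=global burst=200;  # wide ceiling for all traffic combined
            limit_req zone=perip  burst=20;   # tighter cap on any single source IP
        }
    }
}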