I am looking for a way to limit the number of maximum concurrent connections to 1.
I do not want a connection limit per IP, I already know this is supported.
As far as I can see, max_conns would be exactly what I'm looking for, but unfortunately it's not available in the free version:
Additionally, the following parameters are available as part of our
commercial subscription
Limiting worker_connections is not an option, as the minimum it accepts is 4, and it affects more than just incoming requests.
My conf:
server {
    listen 80;
    server_name localhost;

    location / {
        rewrite_by_lua '
            [some lua code]
        ';
        proxy_pass http://127.0.0.1:8080;
    }
}
Literally moments after I posted this question, I stumbled upon this while googling for how to whitelist IPs from a file in Nginx! Kind of funny considering I spent the last 2 hours googling for specific terms about rate limiting; talk about relevance, heh..
limit_conn_zone $server_name zone=servers:1m;
limit_conn servers 1;
Placing this in the http { ... } block seems to do the trick.
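For reference, a minimal sketch of the full placement (the zone must be declared directly in the http block, while the limit itself may also go in a server or location block; the server details are taken from the config above):

http {
    limit_conn_zone $server_name zone=servers:1m;

    server {
        listen 80;
        server_name localhost;

        location / {
            limit_conn servers 1;  # at most one concurrent connection server-wide, since the key is $server_name
            proxy_pass http://127.0.0.1:8080;
        }
    }
}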
Related
Can I limit number of ip's connected to server using nginx?
I want to limit the number of clients that can access example.com/download at the same time.
I don't want to limit connections per IP; each IP can have multiple connections to the server.
I searched everywhere but I couldn't find a solution.
Yes, you can limit a specific URL in nginx.
For that you need to use limit_conn_zone and limit_conn:
http {
    limit_conn_zone $binary_remote_addr zone=download:10m;

    server {
        location /download {
            limit_conn download 10;  # set the limit you need here: 10, 20, 30, etc.
        }
    }
}
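As a side note, connections over the limit are rejected with a 503 by default; if a different status suits your clients better, the limit_conn_status directive can change it, e.g.:

limit_conn_status 429;  # reply 429 Too Many Requests instead of the default 503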
I have some video files on my server and I'm trying to limit the number of connections for each video. If I use $binary_remote_addr the user cannot download other video files at the same time. I want to restrict based on link address instead of binary_remote_addr. Do you think this is possible? Can it be done using map?
limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_conn addr 3;
You can use any key you want, like:
limit_conn_zone $request_uri zone=uri:10m;
limit_conn uri 1;
This will allow only one connection per request URI at a time. Increase the value from 1 to suit your needs.
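One caveat: $request_uri includes the query string, so /video.mp4?t=1 and /video.mp4?t=2 would count as different keys. If each file should be limited regardless of query parameters, $uri (the normalized path without arguments) is likely the better key; a sketch with an illustrative zone name:

limit_conn_zone $uri zone=peruri:10m;
limit_conn peruri 3;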
Goal: Stand up a service that will accept requests to
http://foo.com/a
and turn around and proxy that request to two different services
http://bar.com/b
http://baz.com/c
The background is that I'm using a service that can integrate with other 3rd party services by accepting POST requests, and then posting event callbacks to that 3rd party service at a configured URL. Trouble is that it only supports a single URL in its configuration, so it becomes impossible to integrate more than one service this way.
I've looked into other services like webhooks.io (waaaay too expensive for a moderate amount of traffic) and reflector.io (beta - falls over with a moderate amount of traffic), but so far nothing meets my needs. So I started poking around at standing up my own service, and I'm hoping for as hands-off as possible. Feels like nginx ought to be able to do this...
I came across the following snippet which someone else classified as a bug, but feels like the start of what I want:
upstream apache {
    server 1.2.3.4;
    server 5.6.7.8;
}
...
location / {
    proxy_pass http://apache;
}
Rather than round-robining requests to apache, that will apparently send the same request to both apache servers, which sounds promising. Trouble is, it sends them to the same path on both servers. In my case, the two services will have different paths (/b and /c), and neither is the same path as the inbound request (/a).
So... Any way to specify a destination path on each server in the upstream configuration, or some other clever way of doing this?
You can create local servers, and have each local server's proxy_pass add the different path (/b, /c).
upstream local {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

location / {
    proxy_pass http://local;
}

server {
    listen 8000;
    location / {
        proxy_pass http://1.2.3.4/b;
    }
}

server {
    listen 8001;
    location / {
        proxy_pass http://5.6.7.8/c;
    }
}
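Note that a plain upstream block load-balances, so each incoming request goes to one of the two local servers, not both. If the goal is to truly duplicate every request to both backends, one option (a sketch, not tested against this exact setup) is the mirror directive from ngx_http_mirror_module, available since nginx 1.13.4; the mirrored response is discarded, which is acceptable here since only the primary response goes back to the caller:

server {
    listen 80;

    location /a {
        mirror /mirror;                  # send a copy of each request to the internal location below
        proxy_pass http://bar.com/b;     # primary backend; its response is returned to the client
    }

    location = /mirror {
        internal;
        proxy_pass http://baz.com/c;     # duplicated request; this response is ignored
    }
}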
I think I finally grasped how Docker works, so I am getting ready for the next step: cramming a whole bunch of unrelated applications into a single server with a single public IP. Say, for example, that I have a number of legacy Apache2-VHost-based web-sites, so the best I could figure was to run a LAMP container to replicate the current situation, and improve later. For argument's sake, here is what I have: a container at 172.17.0.2:80 that serves
http://www.foo.com
http://blog.foo.com
http://www.bar.com
Quite straightforward: publishing port 80 lets me correctly access all those sites. Next, I have two services that I need to run, so I built two containers
service-a -> 172.17.0.3:3000
service-b -> 172.17.0.4:5000
and all is good, I can privately access those services from my docker host. The trouble comes when I want to publicly restrict access to service-a through service-a.bar.com:80 only, and to service-b through www.foo.com:5000 only. After a lot of reading, it would seem that I have to create a dreadful artefact called a proxy, or reverse-proxy, to make things more confusing. I have no idea what I'm doing, so I dove nose-first into nginx -- which I had never used before -- because someone told me it's better than Apache at dealing with lots of small tasks and requests -- not that I would know how to turn Apache into a proxy, mind you. Anyway, nginx sounded perfect for a thing that has to take a request and pass it on to another server, so I started reading docs and I produced the following (in addition to the correctly working vhosts):
upstream service-a-bar-com-80 {
    server 172.17.0.3:3000;
}

server {
    server_name service-a.bar.com;
    listen 80;

    location / {
        proxy_pass http://service-a-bar-com-80;
        proxy_redirect off;
    }
}

upstream www-foo-com-5000 {
    server 172.17.0.4:5000;
}

server {
    server_name www.foo.com;
    listen 5000;

    location / {
        proxy_pass http://www-foo-com-5000;
        proxy_redirect off;
    }
}
Which somewhat works, until I access http://blog.bar.com:5000, which brings up service-b. So, my question is: what am I doing wrong?
nginx (like Apache) always has a default server for a given IP+port combination. You only have one server listening on port 5000, so it is your de facto default server for services on port 5000.
So blog.bar.com (which I presume resolves to the same IP address as www.foo.com) will use the default server for port 5000.
If you want to prevent that server block being the default server for port 5000, set up another server block using the same port, and mark it with the default_server keyword, as follows:
server {
    listen 5000 default_server;
    root /var/empty;
}
You can use a number of techniques to render the server inaccessible.
See this document for more.
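For example, one minimal technique is nginx's non-standard 444 status code, which tells nginx to close the connection without sending any response at all:

server {
    listen 5000 default_server;
    return 444;  # close the connection without a response
}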
I'd like to throttle incoming requests into an nginx route.
The current config is similar to this:
upstream up0 {
    server x.x.x.x:1111;
    keepalive 1024;
}

server {
    location /auc {
        limit_req zone=one burst=2100;
        proxy_pass http://up0/auc;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
I'd like to control the number of requests I see at the upstream server. For all other requests I'd like nginx to respond with a 204 response.
Controlling by percentage of incoming requests would also work.
Thanks.
Nginx is very effective at limiting requests using limit_req_zone and limit_req.
First create a zone which has defined limits. For a global limit, the key of the zone can be static; it's also possible to use variables such as the source IP address as a key, which is useful for limiting specific IPs or just slower pages on your site. The rate can be defined in requests per second or per minute.
limit_req_zone key zone=name:size rate=rate;
Next, create a rule to apply that zone to incoming requests. The location directive can be used first to apply the rule only to specific requests, or it can be server wide. The burst option will queue a specified number of requests that exceed the rate limit, and is useful to throttle short bursts of traffic rather than return errors.
limit_req zone=name [burst=number] [nodelay];
The default response code for traffic exceeding the rate limit and not held in a burst queue is 503 (Service Unavailable). Alternate codes like 204 (No Content) can be set.
limit_req_status code;
Putting all that together, a valid config to limit all requests in the location block to 10 per second, with a buffer to queue up to 50 requests before returning errors, and to return the specified 204 response, would look like:
http {
    ....
    limit_req_zone $hostname zone=limit:20m rate=10r/s;
    limit_req_status 204;

    server {
        ...
        location / {
            ...
            limit_req zone=limit burst=50;
        }
    }
}
In practice it's likely the server block will be in a different file included from within the http block. I've just condensed them for clarity.
To test, either use a flood tool or set the request rate=10r/m (10 per minute) and use a browser. It's useful to check the logs and monitor the amount of rejected requests so that you are aware of any impact on your users.
Multiple limit_req_zone rules can be combined to specify loose global limits and then stricter per-source-IP limits. This lets you target the most persistent few users before impacting the wider user base.
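A sketch of that combination (zone names, sizes and rates here are illustrative, not prescriptive; nginx allows several limit_req directives in the same context):

http {
    # loose global limit: a constant key means all traffic shares one counter
    limit_req_zone $hostname zone=global:20m rate=100r/s;
    # stricter per-client limit, keyed on the source IP
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # both rules apply; a request is rejected if it exceeds either limit
            limit_req zone=global burst=200;
            limit_req zone=perip burst=20;
        }
    }
}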