The Nginx status page shows the current requests per second from all IP addresses combined. But to configure the limit_req module I need to choose a limit for ONE IP. Is there a way to see the current requests per second from each individual IP address with nginx?
It would also be nice if you could tell how you decided which req/sec limit to use in your own nginx configuration.
To limit requests from all clients you need to add
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
to your config file. That example limits each client to 1 request/second; the key $binary_remote_addr tracks every client IP separately.
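Note that limit_req_zone only defines the shared zone; to actually enforce the limit you also reference the zone with limit_req in a server or location block. A minimal sketch (the location path and burst value here are illustrative):

limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

server {
    location /login {
        # Allow up to 5 excess requests to queue before rejecting (503 by default)
        limit_req zone=login burst=5;
    }
}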
Check this out for complete tutorial.
I use Nginx to handle HTTP requests. During access log inspection, I found a lot of suspicious requests from the same IP address.
I'd like to configure Nginx to refuse connections from hosts like that one; I don't expect there to be many such hosts, since this was the first one in years.
This is basically what the Nginx geo module is for; I've done a similar thing to whitelist Google crawlers on my sites.
In your http block, define a geo directive and add the CIDR IP ranges you wish to block:
geo $badips {
    default 0;
    64.233.160.0/19 1;
    66.102.0.0/20 1;
    ...
}
This will set the value of the variable $badips to 1 for requests originating from those IP addresses.
Then in your server block, before any location blocks, add:
if ($badips) {
    return 444;
}
Reload Nginx and that's it: requests that set $badips to 1 will be answered with code 444, a non-standard nginx code that simply closes the connection without sending a response (you can change it to another code if you prefer).
If you want to keep the banned addresses in a separate file, you can: inside the geo directive just add include path/to/file;. The syntax within the included file must be the same as above.
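For example, a minimal sketch, assuming the banned ranges live in a hypothetical file /etc/nginx/badips.conf containing one "CIDR 1;" entry per line:

geo $badips {
    default 0;
    include /etc/nginx/badips.conf;
}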
So I'm using nginx to reverse proxy (and load balance) some API backend servers, and I'm using the limit_req_zone directive to limit max requests per IP and URI. No problem with that.
Eventually we might need to scale out and add a couple more nginx instances. Every nginx instance uses a "shared memory zone" to temporarily record (in a cache, I guess) every request, so it can properly check whether the request passes or not according to the limit_req_zone mentioned above. That being said, how does nginx handle it if multiple instances are running at the same time?
For example:
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
This tells nginx to allow only 1 request per second from the same IP address, but what if the second request (within the same second) arrives at another nginx instance? As I understand it, it will pass, because the instances do not share the memory zone where the state is stored, I guess.
I've been trying to research this a bit but couldn't find anything. Any help would be appreciated.
If by multiple nginx instances you mean multiple master processes, I'm not completely sure what the result is. To have multiple master processes running, they would need different configs / different ports to bind to.
For worker processes with a single master instance, the shared memory is precisely that, shared, and all of the workers will limit the requests together. The code documentation says:
shared_memory — List of shared memory zones, each added by calling the ngx_shared_memory_add() function. Shared zones are mapped to the same address range in all nginx processes and are used to share common data, for example the HTTP cache in-memory tree.
http://nginx.org/en/docs/dev/development_guide.html#cycle
In addition, there's a blog entry about limit_req stating the following:
Zone – Defines the shared memory zone used to store the state of each IP address and how often it has accessed a request‑limited URL. Keeping the information in shared memory means it can be shared among the NGINX worker processes. The definition has two parts: the zone name identified by the zone= keyword, and the size following the colon. State information for about 16,000 IP addresses takes 1 megabyte, so our zone can store about 160,000 addresses.
Taken from https://www.nginx.com/blog/rate-limiting-nginx/
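In short: separate nginx instances each keep their own zone, so a client could achieve up to N times the configured rate across N instances. One hedged workaround, assuming two instances behind a load balancer that spreads clients roughly evenly, is to scale the per-instance rate down:

# On each of the two instances: ~0.5 r/s per instance approximates
# 1 r/s globally. This is an approximation, not an exact global limit.
limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

An exact global limit would need shared state outside nginx, for example a single rate-limiting tier in front of both instances.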
Let's say I have this DNS entry: mysite.sample. I am developing, and have a copy of my website running locally at http://localhost:8080. I want this website to be reachable using the (fake) DNS name http://mysite.sample, without being forced to remember which port the site is running on. I can set up /etc/hosts and nginx to do proxying for that, but ... Is there an easier way?
Can I somehow set up a simple DNS entry using /etc/hosts and/or dnsmasq that also specifies a non-standard port (something other than :80/:443), without the need for extra nginx configuration?
Or, phrased more simply: is it possible to provide port mappings for DNS entries in /etc/hosts or dnsmasq?
DNS has nothing to do with the TCP port. DNS is there to resolve names (e.g. mysite.sample) into IP addresses - kind of like a phone book.
So it's a clear "NO". However, there's another solution, and I'll try to explain it.
When you enter http://mysite.sample:8080 in your browser URL bar, your client (e.g. browser) will first try to resolve mysite.sample (via OS calls) to an IP address. This is where DNS kicks in, as DNS is your name resolver. Once that has happened, the job of DNS is done and the browser continues.
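For the local setup from the question, that resolution step can be forced with a single /etc/hosts entry. Note that it only maps the name to an IP address, never to a port:

# /etc/hosts
127.0.0.1    mysite.sample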
This is where the "magic" in HTTP happens. The browser connects to the resolved IP address on the desired port (by default 80 for http and 443 for https), waits for the connection to be accepted, and then sends the following headers:
GET <resource> HTTP/1.1
Host: mysite.sample:8080
Now the server reads those headers and acts accordingly. Most modern web servers have something called "virtual hosts" (e.g. Apache) or "sites" (e.g. nginx). You can configure multiple vhosts/sites - one for each domain. The web server will then serve the site matching the requested host (which is retrieved by the browser from the URL bar and passed to the server via the Host HTTP header). This is pure HTTP and has nothing to do with TCP.
If you can't change the port of your origin service (in your case 8080), you might want to set up a new web server in front of your service. This is also called a reverse proxy. I recommend reading the NGINX Reverse Proxy docs, but you can also use Apache or any other modern web server.
For nginx, just set up a new server block and proxy it to your service:
server {
    listen 80;
    server_name mysite.sample;
    location / { proxy_pass http://127.0.0.1:8080; }
}
There is a mechanism in DNS for discovering the ports that a service uses; it is called the Service record (SRV), which has the form
_service._proto.name. TTL class SRV priority weight port target.
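For illustration, a hypothetical record advertising the site from the question on port 8080 could look like this (the host name is made up):

_http._tcp.mysite.sample. 3600 IN SRV 10 5 8080 myhost.mysite.sample.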
However, to make use of this record you would need to have an application that referenced that record prior to making the call. As Dominique has said, this is not the way HTTP works.
I have written a previous answer that explains some of the background to this, and why this isn't part of the HTTP standard. (The article discusses WS, but the underlying discussion suggested adding this to the HTTP protocol directly.)
Edited to add -
There was actually a draft IETF document exploring an official way to do this, but it never made it past draft stage.
This document specifies a new URI scheme called http+srv which uses a DNS SRV lookup to locate a HTTP server.
There is a specific SO answer here which points to an interesting post here.
I want to deny users access to my site if they have made X requests in Y milliseconds. According to Microsoft, I can use Dynamic IP Security in the web.config.
This is the config that I use for testing the IP throttling:
<security>
  <dynamicIpSecurity>
    <denyByRequestRate enabled="true" maxRequests="2" requestIntervalInMilliseconds="10000"></denyByRequestRate>
  </dynamicIpSecurity>
</security>
Now to my problem: since I'm using Cloudflare, I won't see the real IP of my visitors. According to Cloudflare, they provide the real IP in a couple of headers. However, it does not seem that Azure looks at these headers when checking whether the IP should be denied.
My question is: Is there some way I can still use the web config way of denying requests by IP or do I need to use a code solution instead?
I think that I found the answer to my own question. I'll post my findings here for people that might have the same problem.
According to this, there is an attribute called "enableProxyMode" which
Enables IIS not only to block requests from a client IP that is seen
by IIS, but also to block requests from IP addresses that are received
in the x-forwarded-for HTTP header.
Since Cloudflare puts the real IP of the visitor in the X-Forwarded-For HTTP header it works perfectly.
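Applied to the config above, that would look something like this (a sketch; enableProxyMode is an attribute of the dynamicIpSecurity element):

<security>
  <dynamicIpSecurity enableProxyMode="true">
    <denyByRequestRate enabled="true" maxRequests="2" requestIntervalInMilliseconds="10000" />
  </dynamicIpSecurity>
</security>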
We use nginx with an application server as a backend.
We need to limit the number of simultaneous connections per IP to the backend. We used the limit_conn nginx directive for this purpose, but it doesn't work well in all cases.
If a user opens a lot of connections from one IP and quickly closes them, nginx still passes these requests to the backend, but because the client connection is already closed, they are not counted by limit_conn.
Is it possible to limit the number of simultaneous connections per IP to the backend server with nginx?
You may want to set
proxy_ignore_client_abort off;
Determines whether the connection with a proxied server should be closed when a client closes the connection without waiting for a response.
from the documentation
Another suggestion is to use limit_req to limit the request rate.
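A minimal sketch combining the two suggestions (the zone names, limits, and backend address are illustrative, not from the question):

http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=perip_rate:10m rate=10r/s;
    upstream backend { server 127.0.0.1:8080; }    # hypothetical backend

    server {
        listen 80;
        location / {
            limit_conn perip 5;                    # at most 5 simultaneous connections per IP
            limit_req  zone=perip_rate burst=10;   # and at most ~10 requests/second per IP
            proxy_ignore_client_abort off;         # close the upstream connection if the client aborts
            proxy_pass http://backend;
        }
    }
}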
I'm afraid this facility is not yet available for nginx out of the box. According to the Nginx FAQ
Many users have requested that Nginx implement a feature in the load
balancer to limit the number of requests per backend (usually to one).
While support for this is planned, it's worth mentioning that demand
for this feature is rooted in misbehaviour on the part of the
application being proxied
I've seen a third-party module for this, nginx-limit-upstream, but I've never tried it.