According to nginx documentation on limit_req_zone
One megabyte zone can keep about 16 thousand 64-byte states. If the zone storage is exhausted, the server will return the 503 (Service Temporarily Unavailable) error to all further requests.
I wonder how these zones get cleared. For example, if we have something like
limit_req_zone $binary_remote_addr zone=one:1m rate=1r/s;
and the number of unique users per day exceeds 16000, does that mean the zone will overflow and other users will start getting 503 errors for the configured location? Or is there a window of inactivity after which a user's state is removed from the zone?
My main concern here is to set an optimal zone size without risking exhaustion under high load.
This should be verified, but as I understand it, the lifetime of zone entries is tied to active connections.
So zone=one:1m can hold up to 16K unique IPs among the currently (simultaneously) active connections (the total number of active connections at any moment can exceed 16K, because several connections can be opened from the same IP).
So the zone size in MB should be >= (number of unique IPs with simultaneously active connections) / 16K.
Note that if users share a single IP behind NAT, which is rather common for USSR providers, you will limit the request frequency for that whole group of users, which can be very inconvenient for them; to handle this case you should set rate = simult_users_with_same_ip r/s.
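To make that concrete, here is a minimal sketch; the zone name, the 2m size (roughly 32K states, double the 16K-per-MB estimate), the 10r/s rate, and the burst value are all illustrative assumptions, not recommendations:

http {
    # ~32K states fit in 2 MB, leaving headroom over the 16K-per-MB estimate
    limit_req_zone $binary_remote_addr zone=one:2m rate=10r/s;

    server {
        location / {
            # burst queues short spikes (e.g. several users behind one
            # NAT'd IP) instead of rejecting them outright; nodelay
            # serves the queued requests immediately
            limit_req zone=one burst=20 nodelay;
        }
    }
}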
From https://www.nginx.com/blog/rate-limiting-nginx
If storage is exhausted when NGINX needs to add a new entry, it removes the oldest entry. If the space freed is still not enough to accommodate the new record, NGINX returns status code 503 (Service Temporarily Unavailable). Additionally, to prevent memory from being exhausted, every time NGINX creates a new entry it removes up to two entries that have not been used in the previous 60 seconds.
>16K entries a day is nothing to worry about. NGINX wipes entries that are inactive for more than a minute.
But if the number of simultaneously active entries exceeds 16K, it gets problematic, because NGINX may then evict entries (and their states) that are still in use.
Related
I'm using
red:set_keepalive(max_idle_timeout, pool_size)
(From here: https://github.com/openresty/lua-resty-redis#set_keepalive)
with Nginx and trying to determine the best values to use for max_idle_timeout and pool_size.
If my worker_connections is set to 1024, does it make sense to have a pool_size of 1024?
For max_idle_timeout, is 60000 (1 minute) too "aggressive"? Is it safer to go with a smaller value?
Thanks,
Matt
I think the Check List for Issues section of the official documentation has a good guideline for sizing your connection pool:
Basically if your NGINX handle n concurrent requests and your NGINX has m workers, then the connection pool size should be configured as n/m. For example, if your NGINX usually handles 1000 concurrent requests and you have 10 NGINX workers, then the connection pool size should be 100.
So, if you expect 1024 concurrent requests that actually connect to Redis, then a good size for your pool is 1024/worker_processes, maybe a few more to account for uneven request distribution among workers.
Your keepalive timeout should be long enough to account for the way traffic arrives. If your traffic is constant, you can lower the timeout; or stay with 60 seconds, since in most cases a longer timeout won't make any noticeable difference.
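As a minimal sketch of that sizing (assuming roughly 1000 concurrent requests across 10 workers, so a per-worker pool of 100, and the 60-second idle timeout discussed above; the host, port, and key are placeholders):

local redis = require "resty.redis"
local red = redis:new()

red:set_timeout(1000)  -- 1s timeout for connect/send/read

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect: ", err)
    return
end

local res, err = red:get("some_key")  -- "some_key" is just a placeholder

-- park the connection: idle for up to 60s, in a per-worker pool of 100
-- (1000 concurrent requests / 10 workers)
local ok, err = red:set_keepalive(60000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end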
https://golang.org/pkg/net/http/#Transport.MaxIdleConnsPerHost controls the number of keep-alive connections per host. The problem I'm having is that I don't see how the number of keep-alive connections per host can exceed 1.
If one is already cached, getConn will return the cached connection. If there’s no cached connection it will create a new one.
How could there be more than one per host?
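For what it's worth, here is a minimal sketch of the scenario where more than one ends up cached (example.com and the counts are arbitrary): when several requests are in flight at once, no cached connection is free, so the Transport dials extra connections; after the responses are drained, all of them can be parked as idle, up to MaxIdleConnsPerHost.

package main

import (
	"io"
	"net/http"
	"sync"
)

func main() {
	tr := &http.Transport{MaxIdleConnsPerHost: 4}
	client := &http.Client{Transport: tr}

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := client.Get("http://example.com/")
			if err != nil {
				return
			}
			// drain and close so the connection is returned to the idle pool
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}()
	}
	wg.Wait()
	// at this point up to 4 idle keep-alive connections to example.com
	// may be cached in the Transport
}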
Nginx can log $bytes_sent, and I am trying to figure out whether you could simply collect the requests from the last second, sum them up, and get the bandwidth per second.
My question is: when nginx logs the bytes sent for an HTTP 200 request, is this really the amount of data that the customer received? In other words, does this really represent the current bits per second that I would observe on this TCP port (say, 8080) if I used something like iftop?
My goal is to find a way to log the bits per second going out of the Nginx vhost(server block).
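One way to approach the logging side (a sketch; the format name, port, and paths are arbitrary choices) is to record a millisecond-resolution timestamp next to $bytes_sent, then group lines by second and sum them offline:

# inside the http {} block:
log_format bandwidth '$msec $bytes_sent';

server {
    listen 8080;
    access_log /var/log/nginx/bandwidth.log bandwidth;

    location / {
        root /var/www/html;
    }
}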
Nginx's worker_connections sets the maximum number of simultaneous connections that can be opened by a worker process. This number includes all connections (e.g. connections with proxied servers, among others), not only connections with clients. Another consideration is that the actual number of simultaneous connections cannot exceed the current limit on the maximum number of open files. I have a few questions about this:
What should be the optimal or recommended value for this?
What are the downsides of using a high number of worker connections?
Setting lower limits may be useful when you are resource-constrained. Some connections, for example keep-alive connections, are effectively wasting your resources (even if nginx is very efficient, which it is) and aren't required for the correct operation of a general-purpose server.
Having a lower resource limit will indicate to nginx that you are low on physical resources, and that those available should be allocated to new connections rather than to serving idle keep-alive connections.
What is the recommended value? It's the default.
The defaults are all documented within the documentation:
Default: worker_connections 512;
And it can be confirmed in the source code at event/ngx_event.c, too:
#define DEFAULT_CONNECTIONS 512
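If you do raise it, a minimal sketch (the numbers are illustrative only): bump the file-descriptor limit alongside it, since each connection needs at least one open file, and proxied requests consume two connections each.

# each worker may open up to 1024 connections (clients + upstreams),
# so give it fd headroom; values here are only an example
worker_rlimit_nofile 2048;

events {
    worker_connections 1024;
}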
Can I determine the transfer rate from an ASP.NET application, i.e. how many KB per second are transferred?
You can use some of the performance counters exposed by ASP.NET.
See here for some examples.
Some specific ones that may help you figure out what you want are:
Request Bytes Out Total
The total size, in bytes, of responses sent to a client. This does not include standard HTTP response headers.
Requests/Sec
The number of requests executed per second. This represents the current throughput of the application. Under constant load, this number should remain within a certain range, barring other server work (such as garbage collection, cache cleanup thread, external server tools, and so on).
Requests Total
The total number of requests since the service was started.
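As a rough sketch of reading two of those counters from code (assuming the standard "ASP.NET Applications" category and its __Total__ instance are present on the machine):

using System;
using System.Diagnostics;
using System.Threading;

class CounterDemo
{
    static void Main()
    {
        var bytesOut = new PerformanceCounter(
            "ASP.NET Applications", "Request Bytes Out Total", "__Total__");
        var reqPerSec = new PerformanceCounter(
            "ASP.NET Applications", "Requests/Sec", "__Total__");

        // rate counters need two samples before they report a real value
        reqPerSec.NextValue();
        Thread.Sleep(1000);

        Console.WriteLine("Bytes out total: " + bytesOut.NextValue());
        Console.WriteLine("Requests/sec:    " + reqPerSec.NextValue());
    }
}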
There are a number of debugging tools you can use to check this at the browser. It will of course vary by page, cache settings, server load, network connection speed, etc.
Check out http://www.fiddlertool.com/fiddler/
Or, if you are using Firefox, the Firebug add-on: http://addons.mozilla.org/en-US/firefox/addon/1843