limit_req_zone rate doesn't work as I thought. No matter how high I set it, it still behaves as rate=1r/s.
my nginx.conf:
limit_req_zone $binary_remote_addr zone=lwrite:10m rate=300r/s;
...
limit_req zone=lwrite burst=5;
After reading this doc (http://nginx.org/en/docs/http/ngx_http_limit_req_module.html), I thought my nginx should only delay requests when an IP exceeds 300 r/s, and return a 5xx when it exceeds 305 r/s (the rate plus the burst of 5).
However, if I run the test ab -c 12 -n 12 '127.0.0.1:8090/index.html?_echo=abc', the output is:
Concurrency Level: 12
Time taken for tests: 0.051 seconds
Complete requests: 12
Failed requests: 6
(Connect: 0, Receive: 0, Length: 6, Exceptions: 0)
Write errors: 0
Non-2xx responses: 6
I found 5 warnings and 6 errors in the nginx error.log. It turns out only the first request succeeded immediately, the next 5 were delayed, and the last 6 returned errors. So no matter how high I set the rate, it still behaves as rate=1r/s.
Why? Has anyone else run into this? My nginx versions are 1.5.13 and 1.7.11.
Related
I want to block excessive requests on a per-IP basis, allowing at most 12 requests per second. For this purpose, I have the following in /etc/nginx/nginx.conf:
http {
##
# Basic Settings
##
...
limit_req_zone $binary_remote_addr zone=perip:10m rate=12r/s;
server {
listen 80;
location / {
limit_req zone=perip nodelay;
}
}
Now, when I run the command ab -k -c 100 -n 900 'https://www.mywebsite.com/' in the terminal, I get output showing only 179 non-2xx responses out of 900.
Why isn't Nginx blocking most requests?
Taken from here:
with nodelay nginx processes all burst requests instantly, and without this option nginx makes excessive requests wait so that the overall rate is no more than 1 request per second; the last successful request took 5 seconds to complete.
rate=6r/s actually means one request per 1/6th of a second. So if you send 6 requests simultaneously, you'll get a 503 for 5 of them.
The problem might be in your testing method (or your expectations from it).
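If the goal is simply to see the limiter reject requests, here is a minimal test sketch (the zone name, port and 429 status are illustrative choices, not from the original config): a low rate with no burst refuses every request that arrives sooner than 1/rate after the previous accepted one from the same IP, so a burst of simultaneous requests from ab produces rejections that are easy to count.
limit_req_zone $binary_remote_addr zone=test_perip:10m rate=1r/s;
server {
    listen 8081;
    # Return 429 instead of the default 503 so rejected requests stand out in the ab summary.
    limit_req_status 429;
    location / {
        limit_req zone=test_perip;   # no burst, no nodelay: at most 1 request per second per IP
        return 200 "ok\n";
    }
}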
I am running an nginx-ingress controller in a Kubernetes cluster, and one of my log statements for a request looks like this:
upstream_response_length: 0, 840
upstream_response_time: 60.000, 0.760
upstream_status: 504, 200
I cannot quite understand what that means. Does nginx have a response timeout of 60 seconds, retry the request once more after that (successfully this time), and log both attempts?
P.S. Config for log format:
log-format-upstream: >-
{
...
"upstream_status": "$upstream_status",
"upstream_response_length": "$upstream_response_length",
"upstream_response_time": "$upstream_response_time",
...
}
According to the split_upstream_var method of ingress-nginx, it splits the results of nginx health checks.
Since nginx can try several upstream servers for one request, your log could be interpreted this way:
First upstream is dead (504):
upstream_response_length: 0      // response from the dead upstream has zero length
upstream_response_time: 60.000   // nginx dropped the connection after 60 s
upstream_status: 504             // response code; the upstream doesn't answer
Second upstream works (200):
upstream_response_length: 840    // the healthy upstream returned 840 bytes
upstream_response_time: 0.760    // the healthy upstream responded in 0.760 s
upstream_status: 200             // response code; the upstream is OK
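For reference, here is a sketch (addresses and ports are placeholders) of the plain-nginx setup that produces exactly this kind of comma-separated log entry: when the first upstream server times out, nginx retries the request on the next server in the group, and both attempts are recorded in the $upstream_* variables.
upstream backend {
    server 10.0.0.1:8080;   # assume this one never answers
    server 10.0.0.2:8080;   # assume this one is healthy
}
server {
    listen 8080;
    location / {
        proxy_pass http://backend;
        proxy_read_timeout 60s;              # the first attempt is aborted with 504 after 60 s
        proxy_next_upstream error timeout;   # default behaviour: retry on the next upstream server
        # $upstream_status would then contain "504, 200",
        # $upstream_response_time "60.000, 0.760", and so on.
    }
}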
P.S. JFYI, here's a cool HTTP headers state diagram
I am learning the nginx HttpLimitReqModule (limit_req). I don't understand the concept of nodelay. I have tried the two configurations below, with and without nodelay. In both cases, when I send 10 requests within 1 second, 6 of them get a 503 Service Temporarily Unavailable error and 4 succeed. My question is: if the result is the same with and without nodelay, what is the use of the nodelay option?
With nodelay:
limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
limit_req zone=one burst=2 nodelay;
Without nodelay:
limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
limit_req zone=one burst=2;
Let's take this config:
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
server {
listen 127.0.0.1:81;
location / {
limit_req zone=one burst=5;
echo 'OK';
}
location /nodelay {
limit_req zone=one burst=5 nodelay;
echo 'OK';
}
}
and test it with nodelay
$ siege -q -b -r 1 -c 10 http://127.0.0.1:81/nodelay
done.
Transactions: 6 hits
Availability: 60.00 %
Elapsed time: 0.01 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 600.00 trans/sec
Throughput: 0.09 MB/sec
Concurrency: 0.00
Successful transactions: 6
Failed transactions: 4
Longest transaction: 0.00
Shortest transaction: 0.00
and without nodelay
$ siege -q -b -r 1 -c 10 http://127.0.0.1:81/
done.
Transactions: 6 hits
Availability: 60.00 %
Elapsed time: 5.00 secs
Data transferred: 0.00 MB
Response time: 2.50 secs
Transaction rate: 1.20 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 3.00
Successful transactions: 6
Failed transactions: 4
Longest transaction: 5.00
Shortest transaction: 0.00
They both passed 6 requests, but with nodelay nginx processes all burst requests instantly, while without this option nginx makes excessive requests wait so that the overall rate is no more than 1 request per second; the last successful request took 5 seconds to complete.
EDIT: rate=6r/s actually means one request per 1/6th of a second. So if you send 6 requests simultaneously, you'll get a 503 for 5 of them.
There is a good answer with a “bucket” explanation: https://serverfault.com/a/247302/211028
TL;DR: The nodelay option is useful if you want to impose a rate limit without constraining the allowed spacing between requests.
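A minimal sketch (the zone, port, log name and paths are illustrative) that makes the difference visible without siege: logging $request_time shows the delay that the non-nodelay location imposes on queued requests.
log_format limit_timing '$remote_addr "$request" status=$status request_time=$request_time';
limit_req_zone $binary_remote_addr zone=demo:10m rate=1r/s;
server {
    listen 127.0.0.1:81;
    access_log /var/log/nginx/limit_demo.log limit_timing;
    location /delay {
        limit_req zone=demo burst=5;          # queued requests are released one per second,
        return 200 "OK\n";                    # so request_time climbs towards ~5 s
    }
    location /nodelay {
        limit_req zone=demo burst=5 nodelay;  # queued requests are forwarded immediately,
        return 200 "OK\n";                    # so request_time stays close to 0
    }
}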
There's new documentation from Nginx with examples that answers this: https://www.nginx.com/blog/rate-limiting-nginx/
Here's the pertinent part. Given:
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
location /login/ {
limit_req zone=mylimit burst=20;
...
}
The burst parameter defines how many requests a client can make in
excess of the rate specified by the zone (with our sample mylimit
zone, the rate limit is 10 requests per second, or 1 every 100
milliseconds). A request that arrives sooner than 100 milliseconds
after the previous one is put in a queue, and here we are setting the
queue size to 20.
That means if 21 requests arrive from a given IP address
simultaneously, NGINX forwards the first one to the upstream server
group immediately and puts the remaining 20 in the queue. It then
forwards a queued request every 100 milliseconds, and returns 503 to
the client only if an incoming request makes the number of queued
requests go over 20.
If you add nodelay:
location /login/ {
limit_req zone=mylimit burst=20 nodelay;
...
}
With the nodelay parameter, NGINX still allocates slots in the queue
according to the burst parameter and imposes the configured rate
limit, but not by spacing out the forwarding of queued requests.
Instead, when a request arrives “too soon”, NGINX forwards it
immediately as long as there is a slot available for it in the queue.
It marks that slot as “taken” and does not free it for use by another
request until the appropriate time has passed (in our example, after
100 milliseconds).
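To make the nodelay case just as concrete, here is a sketch reusing the mylimit zone from the quoted example (the proxy_pass target is a placeholder); the comments work through what happens when 25 requests arrive from one IP at the same instant.
location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    # 25 simultaneous requests from one IP:
    #   - request 1 is within the rate and is forwarded immediately,
    #   - requests 2-21 take the 20 burst slots and are also forwarded immediately,
    #   - requests 22-25 are rejected with 503,
    #   - one slot is freed every 100 ms (1/rate), so after 400 ms four more
    #     "too soon" requests could again be forwarded without delay.
    proxy_pass http://backend;
}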
I have a problem with my current setup: it is not working as expected and prevents me from going further toward a server-sent events (SSE) enabled web site. My main question can be found below in bold, but it boils down to "How can I launch an extra thread from a Sinatra web app in a Passenger setup?".
I use Passenger 5.0.21 and Sinatra 1.4.6. The app is written as a classical Sinatra application, not modular, but that can be changed if necessary.
I have put the directive passenger_min_instances 3 in the Nginx configuration to get a minimum of 3 web app instances launched. I have two puts calls in the config.ru file of my Sinatra app, so I get feedback in /var/log/nginx/passenger.log when the thread is launched and also when the thread receives messages through its RabbitMQ queue:
...
# Extra thread created at application boot; it blocks on the RabbitMQ
# subscription (q, ch and conn are set up earlier in the file).
Thread.new {
  puts " [* #{Thread.current.inspect}] Waiting for logs. To exit press CTRL+C"
  begin
    q.subscribe(:block => true) do |delivery_info, properties, body|
      puts " [x #{Thread.current.inspect}] #{body}"
    end
  rescue Interrupt => _
    ch.close
    conn.close
  end
}
run Sinatra::Application
I expected this code to run n times, n being the number of processes launched by Passenger. It looks like that is not the case.
Furthermore, my app.rb contains a lot of stuff, which can be reduced to:
puts "(CLASS)... Inside thread #{Thread.current.inspect}"
configure do
puts "(CONFIGURE)... Inside thread #{Thread.current.inspect}"
end
get '/debug' do
puts "(DEBUG)... Inside thread #{Thread.current.inspect}"
end
When I restart Nginx and make a first HTTP GET request to the URL /debug, the processes are instantiated and one of them serves the request. What do I get in /var/log/nginx/passenger.log?
(CLASS)... Inside thread #<Thread:0x007fb29f4ca258 run>
(CONFIGURE)... Inside thread #<Thread:0x007fb29f4ca258 run>
[* #<Thread:0x007fb29f7f8038#config.ru:68 run>] Waiting for logs. To exit press CTRL+C
(DEBUG)... Inside thread #<Thread:0x007fb29f4ca8e8#/usr/lib/ruby/vendor_ruby/phusion_passenger
192.168.0.11 - test [30/Dec/2015:10:09:08 +0100] "GET /debug HTTP/1.1" 200 2184 0.0138
Both messages starting with CLASS and CONFIGURE are printed inside the same thread. I expected this to happen at process instantiation time, and it did, but it happened only once, which makes me think Passenger fires only one process. However, I can see 3 processes with passenger-status --verbose. Another thread is created (in config.ru) to receive RabbitMQ messages.
As you can see, the first process has processed 1 request (output shortened for clarity):
$ passenger-status --verbose
----------- General information -----------
Max pool size : 6
App groups : 1
Processes : 3
Requests in top-level queue : 0
----------- Application groups -----------
/home/hydro/web2/public:
App root: /home/hydro/web2
Requests in queue: 0
* PID: 1116 Sessions: 0 Processed: 1 Uptime: 2m 19s
CPU: 0% Memory : 18M Last used: 2m 19s ago
* PID: 1123 Sessions: 0 Processed: 0 Uptime: 2m 19s
CPU: 0% Memory : 3M Last used: 2m 19s ago
* PID: 1130 Sessions: 0 Processed: 0 Uptime: 2m 19s
CPU: 0% Memory : 2M Last used: 2m 19s ago
The Ruby test program which publishes a RabbitMQ message for the subscribers to receive sometimes works and sometimes doesn't. Maybe Passenger shuts down the running process when it has not seen a request for a given time. Nothing appears in the log: no feedback from the subscriber thread, no message from Passenger itself.
If I refresh the page, I get the DEBUG message and the GET /debug trace. passenger-status --verbose shows that the first process has now served two requests.
During my various tests I have seen that I have to fire a lot of requests to make Passenger serve requests with the other 2 processes, or even start new processes up to the maximum of 6. Let's do it from another machine on the same LAN with:
root@backup:~# ab -A test:test -kc 1000 -n 10000 https://192.168.0.10:445/debug
Passenger started the maximum of 6 processes to handle the requests, but I can't see anything in the passenger.log file except DEBUG messages and GET /debug traces, as if no other processes had been started.
$ passenger-status --verbose
----------- General information -----------
Max pool size : 6
App groups : 1
Processes : 6
Requests in top-level queue : 0
----------- Application groups -----------
/home/hydro/web2/public:
App root: /home/hydro/web2
Requests in queue: 0
* PID: 1116 Sessions: 0 Processed: 664 Uptime: 16m 29s
CPU: 0% Memory : 28M Last used: 32s ago
* PID: 1123 Sessions: 0 Processed: 625 Uptime: 16m 29s
CPU: 0% Memory : 27M Last used: 32s ago
* PID: 1130 Sessions: 0 Processed: 614 Uptime: 16m 29s
CPU: 0% Memory : 27M Last used: 32s ago
* PID: 2105 Sessions: 0 Processed: 106 Uptime: 33s
CPU: 0% Memory : 23M Last used: 32s ago
* PID: 2112 Sessions: 0 Processed: 103 Uptime: 33s
CPU: 0% Memory : 22M Last used: 32s ago
* PID: 2119 Sessions: 0 Processed: 92 Uptime: 33s
CPU: 0% Memory : 21M Last used: 32s ago
So the main question is: how can I launch a (RabbitMQ subscriber) thread from a Sinatra web application process every time a process is started?
I want to be able to send data to my web app processes so they can send it on to the web client using SSE. I would like to have two threads per web app process: the main thread used by Sinatra and my extra thread doing the RabbitMQ work. There is also an Oracle database and an Erlang back-end, but I don't think they are relevant here.
I am also wondering how Passenger handles process instantiation in the case of a Sinatra web app. Multiple Ruby environments? How can it be that the class appears to be instantiated only once when multiple processes are started? Is the file config.ru (and even app.rb) processed only once even when multiple processes are launched? I have read a lot on the web but could not figure this out.
More generally, what is the proper way of doing SSE with Ruby, Nginx, Passenger and Sinatra?
Details concerning Nginx have been put below for clarity.
Nginx is configured as a reverse proxy standing in front of Passenger, and the web application is configured under server and location / with SSL, HTTP basic authentication and the following directives:
location / {
proxy_buffering off;
proxy_cache off;
proxy_pass_request_headers on;
passenger_set_header Host $http_host;
passenger_set_header X-Real-IP $remote_addr;
passenger_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
passenger_set_header X-Forwarded-Proto $scheme;
passenger_set_header X-Remote-User $remote_user;
passenger_set_header Host $http_host;
passenger_min_instances 3;
proxy_redirect off;
passenger_enabled on;
passenger_ruby /home/hydro/.rbenv/versions/2.3.0/bin/ruby;
passenger_load_shell_envvars on;
passenger_nodejs /usr/bin/nodejs;
passenger_friendly_error_pages on;
}
I think your current architecture is wrong. Your Sinatra app shouldn't be mixed with extra threads, or kept alive simply so that it can push messages to your clients - you should have a separate push server dedicated to pushing out messages, and let your HTTP API do what it does best: sleep until it receives a request.
You mention you are using nginx, so I'd really recommend compiling in this module:
https://github.com/wandenberg/nginx-push-stream-module
Now you may be able to get rid of your RabbitMQ queue - any process that needs to push a message to one of your push subscribers simply sends an HTTP request to this module's RESTful API.
Example curl request:
curl -s -v -X POST 'http://localhost/pub?id=my_channel_1' -d 'Hello World!'
Of course, by default this module will only listen to requests from localhost, for security reasons.
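For completeness, here is a sketch of how the module is typically wired up (directive names as described in the module's README; the paths, channel naming and shared-memory size are illustrative): one publisher location restricted to localhost and one EventSource subscriber location that browsers connect to for SSE.
http {
    push_stream_shared_memory_size 32M;     # shared memory for channels and queued messages
    server {
        listen 80;
        # Publisher endpoint: back-end processes POST messages here (see the curl example above).
        location /pub {
            push_stream_publisher admin;
            set $push_stream_channel_id $arg_id;   # channel taken from ?id=...
            allow 127.0.0.1;                       # only local processes may publish
            deny all;
        }
        # Subscriber endpoint: browsers open an EventSource (SSE) connection here.
        location ~ ^/sub/(.+)$ {
            push_stream_subscriber eventsource;
            set $push_stream_channel_id $1;        # channel taken from the URL
        }
    }
}
A browser would then subscribe with new EventSource('/sub/my_channel_1'), and the curl command above would deliver "Hello World!" to it.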
I have 4 upstream blocks in my nginx config that I'm using depending on the incoming request's scheme or the geo location of the requesting client.
Every time I have to restart nginx it takes around 80 seconds to complete. If I only have 3 upstreams declared it takes about 40 seconds, and with 2 upstreams it restarts pretty much immediately, like it normally does.
Reloads take 1/2 the time (40 seconds with 4 upstreams, 20 seconds with 3 upstreams).
There are no errors logged in the nginx error log, even at debug log level, and if I run /usr/sbin/nginx -t it says the test is successful, but it takes as long as a reload does.
Nginx resolves the IP addresses of all upstream servers at (re)start. Check your DNS.
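If that is indeed the cause, a commonly used workaround (a sketch; the hostname and resolver address are placeholders) is to keep the slow-to-resolve hostname in a variable with a resolver configured, so nginx resolves it per request instead of while loading the configuration - at the cost of giving up the upstream block for that backend:
server {
    listen 80;
    resolver 8.8.8.8 valid=300s;   # any DNS server reachable at runtime
    resolver_timeout 5s;
    location / {
        set $backend "slow-to-resolve.example.com";
        proxy_pass http://$backend:8080;   # hostname in a variable => resolved at request time
    }
}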