If I have the following timeout rules in my http block:
keepalive_timeout 1s;
send_timeout 1s;
And the following location:
location = /slow {
    echo_sleep 10;
    echo "So slow";
}
I would expect /slow to trigger a 408 or 504 (timeout), but it actually serves the request in full. That tells me I'm handling timeouts incorrectly. So how would I limit the length of time nginx spends processing a request?
The documentation clearly says:
Sets a timeout for transmitting a response to the client. The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed.
Your location runs echo_sleep 10 first and only writes "So slow" afterwards, so nothing is sent to the client until the sleep finishes. Since send_timeout only counts time between two successive write operations, and no write has happened yet, the timer never starts during the sleep.
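If the slow part is an upstream that nginx proxies to, there is a directive that does produce a 504. A minimal sketch, assuming a hypothetical backend on 127.0.0.1:8080 and an illustrative location name: if the upstream sends nothing for longer than proxy_read_timeout, nginx closes the upstream connection and returns 504 Gateway Time-out.

location = /slow-upstream {
    proxy_pass http://127.0.0.1:8080;   # hypothetical slow backend
    proxy_read_timeout 1s;              # 504 if the backend stays silent for 1s
}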
Imagine this scenario: a live RTMP broadcast must be conducted from a location where network problems are likely. There will be a second link (LTE) that can be considered a last resort, because it's not very reliable. Automatic link switching is in place, but everything takes time. I thought it would be possible to first broadcast to some kind of relay station with a 1-2 minute buffer, so that if the connection drops, the relay would keep the stream alive until the main location reconnects over one of the links.
I've tried nginx-rtmp-module and played with all kinds of options, but every time I disconnect the source from the network there is a hiccup in the stream (I tested this on a YouTube live stream). The first time I try, I get a few seconds before the stream freezes, but from the second time on it's almost instant once the OBS machine loses its internet connection. The client buffer length on nginx has almost no impact, other than on how long I have to wait for the stream to appear on YouTube.
my config:
rtmp {
    ping 10s;

    server {
        listen 1935;
        buflen 20s;
        chunk_size 4096;

        application live {
            idle_streams off;
            live on;
            record off;
            push rtmp://a.rtmp.youtube.com/live2/my_super_duper_key;
        }
    }
}
I would be very grateful for any help. Maybe I should be using something other than nginx?
Is there a way to tell nginx to time out a request after a certain length of time, regardless of whether data continues to flow through or not?
I know send_timeout will kill the connection if no data is picked up by the client for that long, but if the client is, for instance, using curl --limit-rate 1, is there a setting that says "no matter what else is going on, time out the request after x seconds"?
We continually poll our nginx server every 5 seconds, using keep-alive to hold the connection open.
By default keepalive_requests is set to 100, so after 100 requests on the keep-alive connection, nginx disconnects.
Currently we have set keepalive_requests to a very large number to work around this, but is there a way to make it infinite?
We want to hold the connection open indefinitely, regardless of how many requests are made over the same keep-alive connection; keepalive_timeout alone is enough for us.
Currently, the only way to do this is to modify the source. This is the relevant code within nginx:
if (r->keepalive) {
    if (clcf->keepalive_timeout == 0) {
        r->keepalive = 0;

    } else if (r->connection->requests >= clcf->keepalive_requests) {
        /* the per-connection request count has hit keepalive_requests */
        r->keepalive = 0;

    } else { ... }
}
A value of 4294967295 (2^32 - 1) for keepalive_requests corresponds to about 680 years of one request every 5 seconds (4294967295 × 5 s ≈ 2.15 × 10^10 s). If you need more than that, I'd recommend patching the code.
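As a sketch of that workaround (the number comes from the answer above; the surrounding http block is illustrative):

http {
    keepalive_requests 4294967295;   # effectively unlimited for a 5-second poller
    keepalive_timeout  75s;          # the idle timeout still applies independently
}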
This is my nginx status:
Active connections: 6119
server accepts handled requests
418584709 418584709 455575794
Reading: 439 Writing: 104 Waiting: 5576
The value of Waiting is much higher than Reading and Writing. Is that normal?
Is it because keep-alive connections are open?
But if I send a large number of requests to the server, the Reading and Writing values don't increase, so I think there must be a bottleneck in nginx or elsewhere.
Waiting is Active - (Reading + Writing), i.e. connections that are still open, waiting either for a new request or for the keepalive expiration.
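With the numbers above, that's 6119 - (439 + 104) = 5576, which matches the Waiting figure exactly.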
You could change the keepalive timeout default (which is 75 seconds):
keepalive_timeout 20s;
or tell the browser when it should close the connection by adding an optional second timeout, which is sent to the browser in the Keep-Alive response header:
keepalive_timeout 20s 20s;
but as the nginx documentation on keepalive notes, some browsers ignore that header (so your site wouldn't gain much from this optional parameter anyway).
Keepalive is a way to reduce the overhead of creating connections, since most of the time a user will navigate through the site, etc. (plus a single page triggers multiple requests to download CSS, JavaScript, images, and so on).
It depends on your site; you could reduce the keepalive timeout, but keep in mind that establishing connections is expensive. This is a trade-off you have to refine using the site's statistics. You could also decrease the timeout little by little (75s -> 50s, then a week later 30s...) and see how the server behaves.
You don't really want to fix it, as "waiting" means keep-alive connections. They consume almost no resources (a socket + about 2.5M of memory per 10000 connections in nginx).
Are the requests short-lived? It's possible they're reading/writing and then closing within a short amount of time.
If you're genuinely interested in finding out whether nginx is the bottleneck, you can test by disabling keep-alive in your nginx config:
keepalive_timeout 0;
I'm developing a Facebook canvas application and I want to load-test it. I'm aware of the Facebook restriction on automated testing, so I simulated the Graph API calls by creating a fake web application served under nginx and altering my /etc/hosts to point graph.facebook.com to 127.0.0.1.
I'm using jmeter to load-test the application and the simulation is working ok. Now I want to simulate slow graph api responses and see how they affect my application. How can I configure nginx so that it inserts a delay to each request sent to the simulated graph.facebook.com application?
You can slow down localhost (network) traffic by adding delay.
Use the ifconfig command to list network devices: on localhost it is usually lo, and on a LAN it is typically eth0.
To add delay, use this command (it adds a 1000 ms delay on the lo network device):
tc qdisc add dev lo root netem delay 1000ms
To change the delay, use:
tc qdisc change dev lo root netem delay 1ms
To see the current delay:
tc qdisc show dev lo
And to remove the delay:
tc qdisc del dev lo root netem delay 1000ms
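Note that on lo the delay is applied each time a packet traverses the device, so a request/response round trip over 127.0.0.1 gains roughly twice the configured delay; a quick ping to 127.0.0.1 is an easy way to confirm the rule took effect.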
My earlier answer works, but it is better suited to the case where all requests need to be slowed down. I've since had to come up with a solution that would let me turn the rate limit on only case by case, and came up with the following configuration. Make sure to read the entire answer before you use this, because there are important nuances to know.
location / {
    if (-f somewhere/sensible/LIMIT) {
        echo_sleep 1;
        # Yes, we need this here too.
        echo_exec /proxy$request_uri;
    }
    echo_exec /proxy$request_uri;
}

location /proxy/ {
    internal;
    # Ultimately, all this goes to a Django server.
    proxy_pass http://django/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
Important note: the presence or absence of forward slashes in the various paths makes a difference. For instance, proxy_pass http://django, without a trailing slash, does not do the same thing as the line in the code above.
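A minimal sketch of that difference, keeping the hypothetical upstream name django and using illustrative location names: when proxy_pass carries a URI part, the portion of the request that matched the location is replaced by it; without one, the request URI is passed through unchanged.

location /one/ {
    proxy_pass http://django/;   # /one/foo is sent upstream as /foo
}

location /two/ {
    proxy_pass http://django;    # /two/foo is sent upstream as /two/foo
}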
The principle of operation is simple. If the file somewhere/sensible/LIMIT exists, then requests matching location / are paused for one second before moving on. So in my test suite, when I want a network slowdown, I create the file, and when I want to remove the slowdown, I remove it. (And I have cleanup code that removes it between tests.)
In theory I'd much prefer using variables for this rather than a file, but the problem is that variables are reinitialized with each request. So we cannot have one location block that sets a variable to turn the limit on, and another to turn it off. (That's the first thing I tried, and it failed due to the lifetime of variables.) It would probably be possible to use the Perl module or Lua to persist variables, or to fiddle with cookies, but I've decided not to go down those routes.
Important notes:
It is not a good idea to mix directives from the echo module (like echo_sleep and echo_exec) with the stock directives of nginx that result in the production of a response. I initially had echo_sleep together with proxy_pass and got bad results. That's why we have the location /proxy/ block that segregates the stock directives from the echo stuff. (See this issue for a similar conflict that was resolved by splitting a block.)
The two echo_exec directives, inside and outside the if, are necessary due to how if works.
The internal directive prevents clients from directly requesting /proxy/... URLs.
I've modified an nginx config to use limit_req_zone and limit_req to introduce delays. The following reduces the rate of service to 20 requests per second (rate=20r/s). I've set burst=1000 so that my application would not get 503 responses.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=20r/s;
    [...]

    server {
        [...]

        location / {
            limit_req zone=one burst=1000;
            [...]
        }
    }
}
The documentation is here. I do not believe there is a way to specify a uniform delay using this method.
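For a sense of the delays this introduces: at rate=20r/s, queued requests are released roughly one every 1/20 s = 50 ms, so with burst=1000 a request can wait up to about 1000 × 50 ms = 50 s in the queue before further requests start being rejected. The delay therefore varies with queue depth; it is not uniform.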