I'm developing a Facebook canvas application and I want to load-test it. I'm aware of Facebook's restriction on automated testing, so I simulated the Graph API calls by creating a fake web application served under nginx and altering my /etc/hosts to point graph.facebook.com to 127.0.0.1.
I'm using JMeter to load-test the application and the simulation is working fine. Now I want to simulate slow Graph API responses and see how they affect my application. How can I configure nginx so that it inserts a delay into each request sent to the simulated graph.facebook.com application?
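For context, the stub serving the fake Graph API can be as simple as something along these lines (the document root, file layout and plain-HTTP listener are placeholders; an app that calls the API over HTTPS would also need a self-signed certificate it trusts):
server {
    listen 80;
    server_name graph.facebook.com;

    # Canned JSON responses mimicking the Graph API objects the app requests.
    root /var/www/fake-graph;
    default_type application/json;

    location / {
        try_files $uri $uri.json =404;
    }
}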
You can slow down the loopback (or any other network) interface by adding an artificial delay.
Use the ifconfig command to list your network devices: the loopback interface is lo, and a LAN interface is typically eth0.
To add a delay, use this command (here adding a 1000 ms delay on the lo device):
tc qdisc add dev lo root netem delay 1000ms
To change the delay:
tc qdisc change dev lo root netem delay 1ms
To see the current delay:
tc qdisc show dev lo
And to remove the delay:
tc qdisc del dev lo root netem delay 1000ms
My earlier answer works, but it is better suited to a case where all requests need to be slowed down. I've since needed to turn the slowdown on only on a case-by-case basis, and came up with the following configuration. Make sure to read the entire answer before you use this, because there are important nuances to know.
location / {
    if (-f somewhere/sensible/LIMIT) {
        echo_sleep 1;
        # Yes, we need this here too.
        echo_exec /proxy$request_uri;
    }

    echo_exec /proxy$request_uri;
}

location /proxy/ {
    internal;

    # Ultimately, all this goes to a Django server.
    proxy_pass http://django/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
Important note: the presence or absence of forward slashes in the various paths makes a difference. For instance, proxy_pass http://django, without a trailing slash, does not do the same thing as the line in the code above.
The principle of operation is simple. If the file somewhere/sensible/LIMIT exists, then requests that match location / are paused for one second before moving on. So in my test suite, when I want a network slowdown, I create the file, and when I want to remove the slowdown, I remove it. (And I have cleanup code that removes it between tests.)
In theory I'd much prefer using variables for this rather than a file, but the problem is that variables are reinitialized with each request. So we cannot have one location block that sets a variable to turn the limit on, and another that turns it off. (That's the first thing I tried, and it failed due to the lifetime of variables.) It would probably be possible to use the Perl module or Lua to persist variables, or to fiddle with cookies, but I've decided not to go down those routes.
Important notes:
It is not a good idea to mix directives from the echo module (like echo_sleep and echo_exec) with the stock directives of nginx that result in the production of a response. I initially had echo_sleep together with proxy_pass and got bad results. That's why we have the location /proxy/ block that segregates the stock directives from the echo stuff. (See this issue for a similar conflict that was resolved by splitting a block.)
The two echo_exec directives, inside and outside the if, are necessary due to how if works.
The internal directive prevents clients from directly requesting /proxy/... URLs.
I've modified an nginx config to use limit_req_zone and limit_req to introduce delays. The following reduces the rate of service to 20 requests per second (rate=20r/s). I've set burst=1000 so that my application does not get 503 responses.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=20r/s;
    [...]
    server {
        [...]
        location / {
            limit_req zone=one burst=1000;
            [...]
        }
    }
}
The documentation is here. I do not believe there is a way to specify a uniform delay using this method.
My goal is to configure nginx's stream object(s) in the config to route requests to a backup upstream in the event that one fails certain health checks (2 of 3).
The health checks, while somewhat specific, shouldn't (I believe) be an issue:
- TCP 1212 availability
- TCP 1912 availability
- HTTP GET on 7078 /?
- Response should be 200 and if I can get the body somehow to check that it's as expected, even better!
If these checks fail on one upstream "cluster", so to speak, I would like to route requests to another identical cluster, much like a backup.
The issue I'm solving is that the servers are quite literally half a world apart, so load balancing through one server would introduce the same latency as simply waiting for it to fail. So while a load balancer would exhibit the "routing" behavior I want in the end, the response time would be unacceptable.
Is there a way to do this in NGINX configs or am I spreading it too thin?
The NGINX upstream module will do passive health checks for you, meaning it will react to connection failures, and optionally switch to backup servers as necessary. To some extent, that might be enough for you.
What you're describing here though are active health checks that let you check different ports from the traffic port, assert HTTP status, header values and even body content. Unfortunately, having dangled that in front of you, these are only available as part of the NGINX Commercial Subscription, which I'm guessing isn't what you're looking for.
If you do need that kind of proactive health check, you can still do it from outside of NGINX. One approach might be:
put your upstreams in separate confs, and include one of them where you need it
use ncat and/or curl in an every-minute cron job to do the tests that matter to you
if ever those tests fail, switch out the upstream confs, and tell NGINX to do a zero-downtime reload
You can switch confs with a quick mv to rename the right one to match the include; you shouldn't have to rewrite anything (see the sketch below).
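As a rough sketch of that include/swap arrangement (the file names, upstream name, hosts and ports below are made up for illustration; the cron job would mv one of the two candidate files over active_upstream.conf and then run nginx -s reload):
# Candidate file upstream_primary.conf (kept outside conf.d until activated):
upstream app_cluster {
    server primary-a.example.com:1212;
    server primary-b.example.com:1212;
}

# Candidate file upstream_backup.conf (same upstream name, different hosts):
upstream app_cluster {
    server backup-a.example.com:1212;
    server backup-b.example.com:1212;
}

# The main config always references whichever candidate was last moved into place:
include /etc/nginx/conf.d/active_upstream.conf;

server {
    location / {
        proxy_pass http://app_cluster;
    }
}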
I want to limit requests to my service depending on the time of day.
It should be 4 r/s by night (21:00 - 06:59) and 1 r/s by day (07:00-20:59).
I could define two different limit_req_zones, like
limit_req_zone $server_name zone=day:1m rate=1r/s;
limit_req_zone $server_name zone=night:1m rate=4r/s;
but how can I choose between them in limit_req depending on the time of day?
The service behind nginx is an API wrapper for another service with the limits described above. My service is written with Flask-RESTful and runs under uWSGI with a few processes, so it would be pretty painful to implement the limiting logic myself and sync it between processes with something like Redis.
Is it possible to configure nginx this way?
If it's not, are there any common workarounds? How do other services solve this task?
To be more specific, it should be a leaky bucket, so clients just wait for an answer without getting errors like HTTP 429 Too Many Requests.
The answer
The key is to combine limit_req_zone and map.
There are two relevant notes about them in the docs. For map:
default value
sets the resulting value if the source value matches none of the specified variants. When default is not specified, the default resulting value will be an empty string.
and limit_req_zone:
limit_req_zone key zone=name:size rate=rate;
Sets parameters for a shared memory zone that will keep states for various keys. In particular, the state stores the current number of excessive requests. The key can contain text, variables, and their combination. Requests with an empty key value are not accounted.
So one simply uses both the day and the night limit_req_zone, with $server_name as the key when the zone should apply and an empty string when it shouldn't. The map returns either $server_name or an empty string depending on the time of day.
map $date_gmt $day {
    # 07:00-20:59 GMT
    ~(0[7-9]|1[0-9]|20):[0-5][0-9]:[0-5][0-9] $server_name;
}
map $date_gmt $night {
    # 21:00-06:59 GMT
    ~(2[1-4]|0[0-6]):[0-5][0-9]:[0-5][0-9] $server_name;
}

limit_req_zone $day zone=day_zone:1m rate=1r/s;
limit_req_zone $night zone=night_zone:1m rate=4r/s;
...
limit_req zone=day_zone burst=100;
limit_req zone=night_zone burst=100;
Some notes about limit_req
On my first attempt I tried to use the mapped variable directly in limit_req, but nginx doesn't understand that syntax. Furthermore, at first glance nginx -s reload worked without any problem, but in fact no reload happened, and only service nginx restart showed an error (nginx version 1.12.2). So, to show how one should not do it:
limit_req_zone $server_name zone=day:1m rate=1r/s;
limit_req_zone $server_name zone=night:1m rate=4r/s;
map $date_gmt $time_of_day {
    ~(0[7-9]|1[0-9]|20):[0-5][0-9]:[0-5][0-9] day;
    ~(2[1-4]|0[0-6]):[0-5][0-9]:[0-5][0-9] night;
}
...
limit_req zone=$time_of_day burst=100;
Some notes about performance
It's not the best solution because of the double time check on every request. An alternative is to have two separate nginx configs and switch between them with something like cron. That would be ugly and error-prone, because you have to remember to edit both configs, but it would run faster; I think it should be used only as a last resort. If horizontal scaling is an option, it's better to load balance across multiple servers.
In my case it's not a big deal: the service gets only about 8k requests per hour (roughly 2.2 r/s).
Pretty much the title. I've got a Node app behind nginx, and when I restart the app I would like nginx to delay the response and retry the request a couple of times with some delay in between. Everything I found would only retry N times instantly, which obviously isn't useful when the app is down for a restart, which is my use case. Is there some way? I don't even care how hacky it is; I just need a solution that doesn't involve starting a second instance of the app and killing the first one once the second has started.
Thanks!
You can add the same server as multiple upstream entries and configure the proxy_next_upstream, proxy_next_upstream_timeout and proxy_next_upstream_tries options as well. Reference
upstream node_servers {
    server 127.0.0.1:12005;
    server 127.0.0.1:12005;
}
...
proxy_next_upstream error timeout http_502;
proxy_next_upstream_timeout 60s;
proxy_next_upstream_tries 3;
However, I would recommend using a process manager like pm2, which supports graceful reload/restart. This is particularly relevant if you are running your Node.js server in clustered mode across more than one CPU.
How could I call all upstreams at once and return the result from the first one that responds with something other than a 404?
Example:
A call to the load balancer at "serverX.org/some-resource.png" creates two requests to:
srv1.serverX.org/some-resource.png
srv2.serverX.org/some-resource.png
srv2 responds faster and the response is shown to the user.
Is this possible at all? :)
Thanks!
Short answer: no. You can't do exactly what you described with nginx. If you think about it, this operation can't really be called load balancing, since every backend receives the full amount of traffic.
A good question is: what do you hope to accomplish with that? Better performance?
You can be sure you will get better results with simple load balancing between your servers, since each will only have to handle half of the traffic.
If you have a more complex architecture, i.e. different loads from different paths to your backend servers, a more sophisticated load-balancing method could be discussed.
So if your purpose is something other than performance, there are some things you can do:
1) After you send the request to the first server, you can use post_action to send it to another one.
location ~ ^/.*\.png$ {
    proxy_pass http://srv1.serverX.org;
    ...
    post_action @mirror_to_srv2;
    ...
}

location @mirror_to_srv2 {
    proxy_ignore_client_abort on;
    ...
    proxy_pass http://srv2.serverX.org;
}
2) The request is available to you in nginx as a variable, so with some Lua scripting you can send it wherever you want.
Note that the above methods are not meant to tackle performance issues, but rather to enable things like mirroring live traffic to dev servers for test/debug purposes.
Lastly, this one seems to provide the functionality you want, but remember that it isn't built for the use you seem to have in mind.
We're using Nginx to load balance between two upstream app servers and we'd like to be able to take one or the other down when we deploy to it. We're finding that when we shut one down, Nginx is not failing over to the other. It keeps sending requests and logging errors.
Our upstream directive has the form:
upstream app_servers {
    server 10.100.100.100:8080;
    server 10.100.100.200:8080;
}
Our understanding from reading the Nginx docs is that we don't need to explicitly specify max_fails or fail_timeout because they have reasonable defaults (i.e. max_fails of 1).
Any idea what we might be missing here?
Thanks much.
As per the documentation...
max_fails = NUMBER - number of unsuccessful attempts at communicating with the server within the time period (assigned by parameter fail_timeout) after which it is considered inoperative. If not set, the number of attempts is one. A value of 0 turns off this check. What is considered a failure is defined by proxy_next_upstream or fastcgi_next_upstream (except http_404 errors which do not count towards max_fails).
As per the documentation, a failure is defined by proxy_next_upstream or fastcgi_next_upstream.
It keeps sending requests and logging errors.
Please check the log to see what types of errors are being logged; if they are not covered by the default (error or timeout), then you may want to list them explicitly in proxy_next_upstream or fastcgi_next_upstream.
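For example, if the log shows the stopped server returning 502/503-style responses rather than plain connection errors or timeouts, something along these lines (the max_fails/fail_timeout values are illustrative, not a drop-in fix) tells nginx to also treat those responses as failures and move on to the other server:
upstream app_servers {
    server 10.100.100.100:8080 max_fails=1 fail_timeout=10s;
    server 10.100.100.200:8080 max_fails=1 fail_timeout=10s;
}

server {
    location / {
        # Fail over on 5xx responses as well as connection errors/timeouts.
        proxy_next_upstream error timeout http_502 http_503 http_504;
        proxy_pass http://app_servers;
    }
}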