We use Nginx as a load balancer for our WebSocket application. Every backend server keeps session information, so every request from a given client must be forwarded to the same server. We use the ip_hash directive to achieve this:
upstream app {
    ip_hash;
    server 1;
}
The problem appears when we want to add another backend server:
upstream app {
    ip_hash;
    server 1;
    server 2;
}
New connections go to both server 1 and server 2, but that is not what we need here: the load on server 1 keeps growing. We still need sticky sessions, but with least_conn-style behaviour on top, so that our two servers receive approximately equal load.
We also considered using the nginx-sticky-module, but its documentation says that if no sticky cookie is available it falls back to Nginx's default round-robin algorithm, so it does not solve the problem either.
So the question is: can we combine sticky and least-connections logic using Nginx? Do you know of other load balancers that solve this problem?
Using the split_clients module could probably help:
upstream app {
    ip_hash;
    server 127.0.0.1:8001;
}

upstream app_new {
    ip_hash;
    server 127.0.0.1:8002;
}

split_clients "${remote_addr}AAA" $upstream_app {
    50% app_new;
    *   app;
}
This will split your traffic and create the variable $upstream_app, which you can then use like this:
server {
    location /some/path/ {
        proxy_pass http://$upstream_app;
    }
}
This is a workaround that approximates least_conn behaviour while keeping sticky sessions. The "downside" is that if more servers need to be added, a new upstream needs to be created (sketched below) and the split percentages adjusted, for example:
split_clients "${remote_addr}AAA" $upstream_app {
    30% app_another_server;
    30% app_new;
    *   app;
}
For testing:
for x in {1..10}; do
    curl "0:8080?token=$(LC_ALL=C; cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)"
done
More info about this module can be found in the ngx_http_split_clients_module documentation (it is commonly used for A/B testing): http://nginx.org/en/docs/http/ngx_http_split_clients_module.html
You can easily achieve this using HAProxy, and I suggest going through its documentation thoroughly to see how your current setup can benefit. With HAProxy, you'd have something like:
backend nodes
    # Other options above omitted for brevity
    cookie SRV_ID prefix
    server web01 127.0.0.1:9000 cookie web01 check
    server web02 127.0.0.1:9001 cookie web02 check
    server web03 127.0.0.1:9002 cookie web03 check
This simply means that the proxy tracks requests to and from the servers by using a cookie.
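To get stickiness combined with least-connections balancing, which is what the original question asks for, HAProxy lets you choose the balance algorithm independently of cookie persistence: new (cookie-less) clients are placed by leastconn, and the cookie pins them to that server afterwards. A minimal sketch with placeholder addresses:

backend nodes
    # cookie-less clients go to the server with the fewest connections
    balance leastconn
    # insert a cookie so subsequent requests stick to the chosen server
    cookie SRV_ID insert indirect nocache
    server web01 127.0.0.1:9000 cookie web01 check
    server web02 127.0.0.1:9001 cookie web02 check
    server web03 127.0.0.1:9002 cookie web03 check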
However, if you don't want to use HAProxy, I'd suggest you change your session implementation to use an in-memory store such as Redis or Memcached. This way you can use least_conn or any other algorithm without worrying about sessions.
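Once sessions live in Redis or Memcached, the nginx side becomes trivial; a minimal sketch (addresses are placeholders):

upstream app {
    # safe now that no individual server holds session state
    least_conn;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
}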
I know I could use something like this:
stream {
    upstream ssh {
        server X.X.X.X:22;
    }

    server {
        listen 2222;
        proxy_pass ssh;
    }
}
to proxy incoming traffic on port 2222 to port 22 on another IP.
Straightforward. But is there a way to create a dynamic proxy that accepts the final destination's hostname and port as parameters?
Something that could be used like this:
proxy_hostname:8080?destination_hostname=example.com&destination_port=1111
ngx_stream_core_module does not accept URL parameters. Can nginx be used as a dynamic proxy, or only for static tunneling?
I'm asking this because I need a way to hide the IP of a machine issuing PHP MySQL requests:
mysqli_connect($hostname, ...)
Right now I cannot specify a proxy for the PHP script alone, only for the entire machine.
Maybe with a small script and fcgiwrap:
https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/
fcgiwrap calls a bash script in which you can convert the URI into the program you want to call (mysql) and return the output to nginx as web content.
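A minimal sketch of the nginx side of that idea, assuming fcgiwrap is listening on its usual unix socket and /usr/local/bin/tunnel.sh is a hypothetical wrapper script:

location /tunnel {
    gzip off;
    # fcgiwrap turns the CGI script's stdout into the HTTP response
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    include fastcgi_params;
    # hypothetical script that parses $args and invokes the mysql client
    fastcgi_param SCRIPT_FILENAME /usr/local/bin/tunnel.sh;
}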
You could also alter the nginx config and reload the service. This way you could "dynamically" open/forward ports. Quite insecure if you make it publicly available.
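For example, a small wrapper script could template a stream server into a drop-in directory and reload nginx. A sketch, assuming /etc/nginx/stream.d/ is included from your stream block (all paths and names are assumptions):

#!/bin/sh
# usage: add_tunnel.sh <listen_port> <dest_host> <dest_port>
LISTEN_PORT="$1"; DEST_HOST="$2"; DEST_PORT="$3"

cat > "/etc/nginx/stream.d/tunnel_${LISTEN_PORT}.conf" <<EOF
server {
    listen ${LISTEN_PORT};
    proxy_pass ${DEST_HOST}:${DEST_PORT};
}
EOF

# validate first so a bad destination cannot take nginx down
nginx -t && nginx -s reload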
Goal: Stand up a service that will accept requests to
http://foo.com/a
and turn around and proxy that request to two different services
http://bar.com/b
http://baz.com/c
The background is that I'm using a service that can integrate with 3rd-party services by accepting POST requests, and which then posts event callbacks to the 3rd-party service by POSTing to a URL. Trouble is that it only supports a single URL in its configuration, so it becomes impossible to integrate more than one service this way.
I've looked into other services like webhooks.io (waaaay too expensive for a moderate amount of traffic) and reflector.io (beta, falls over with a moderate amount of traffic), but so far nothing meets my needs. So I started poking around at standing up my own service, and I'm hoping for something as hands-off as possible. Feels like nginx ought to be able to do this...
I came across the following snippet, which someone else classified as a bug, but it feels like the start of what I want:
upstream apache {
    server 1.2.3.4;
    server 5.6.7.8;
}
...
location / {
    proxy_pass http://apache;
}
Rather than round-robining requests to apache, that will apparently send the same request to both apache servers, which sounds promising. Trouble is, it sends them to the same path on both servers. In my case the two services have different paths (/b and /c), and neither is the same path as the inbound request (/a).
So... Any way to specify a destination path on each server in the upstream configuration, or some other clever way of doing this?
You can create local servers, and have each local server's proxy_pass add the different path (/b, /c):
upstream local {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

location / {
    proxy_pass http://local;
}

server {
    listen 8000;
    location / {
        proxy_pass http://1.2.3.4/b;
    }
}

server {
    listen 8001;
    location / {
        proxy_pass http://5.6.7.8/c;
    }
}
I have 5 backend servers. I want nginx to forward the POST request for /myapp/refresh to all 5 backend servers. For any other request, it can do load balancing. Is this possible? Can you please give a sample configuration?
I'm not aware of a ready-to-use solution that does what you want.
It is definitely possible to implement such behavior in C or Lua.
You could develop an nginx C module, but that is not a trivial task and comes with a serious learning curve.
You could use https://github.com/openresty/lua-nginx-module with something like https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi (see the sketch below).
But in either case you have to implement some logic about when and which response to send back.
A question to think about: do you need to respond with 200 OK if one of the backends times out or responds with an error?
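For the Lua route, a minimal sketch using ngx.location.capture_multi; the location names and ports are assumptions, it is shown for two backends (extend the pattern for all five), and the response-merging logic is still up to you:

location = /myapp/refresh {
    content_by_lua_block {
        -- make the request body available to the subrequests
        ngx.req.read_body()
        -- issue both subrequests in parallel and wait for both to finish
        local res1, res2 = ngx.location.capture_multi{
            { "/b1/myapp/refresh", { method = ngx.HTTP_POST, always_forward_body = true } },
            { "/b2/myapp/refresh", { method = ngx.HTTP_POST, always_forward_body = true } },
        }
        -- here we just report both statuses back to the client
        ngx.say("b1: ", res1.status, ", b2: ", res2.status)
    }
}

location /b1/ { internal; proxy_pass http://127.0.0.1:8001/; }
location /b2/ { internal; proxy_pass http://127.0.0.1:8002/; }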
You can try the ngx_http_mirror_module (available since 1.13.4), which implements mirroring of an original request by creating background mirror subrequests. Responses to mirror subrequests are ignored. https://nginx.org/en/docs/http/ngx_http_mirror_module.html
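A sketch of that approach; the upstream name and addresses are assumptions, and you would add one mirror location per extra server. The main request is proxied (and load-balanced) normally while copies are fired at the other backends in the background:

location = /myapp/refresh {
    # copies of the request go to the mirror locations; their
    # responses are ignored
    mirror /mirror_b2;
    mirror /mirror_b3;
    proxy_pass http://backend;
}

location = /mirror_b2 {
    internal;
    proxy_pass http://127.0.0.1:8002$request_uri;
}

location = /mirror_b3 {
    internal;
    proxy_pass http://127.0.0.1:8003$request_uri;
}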
You should be able to use nginx as a load balancer using a simple config such as:
http {
    upstream myproject {
        server 127.0.0.1:8000 weight=3;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    server {
        listen 80;
        server_name www.domain.com;
        location / {
            proxy_pass http://myproject;
        }
    }
}
docs:
https://www.nginx.com/resources/admin-guide/load-balancer/
This should route all requests including the POST request you mentioned.
I'm trying to manage a deployment to servers running behind an NGINX Plus server configured as a load balancer. The app servers are sent traffic from nginx using the proxy_pass directive. What I'd like to do is direct traffic to one upstream by default, but to a different one for testing as we deploy to spare instances. I'm trying to select this by having developers set a header in their browser, which nginx then looks for and uses to set a variable holding the relevant proxy.
It all seems sensible, but it simply doesn't work. I'm not sure if I've misunderstood how it works, but it does seem odd.
The upstreams are configured as
upstream site-cluster {
    zone site 64k;
    least_conn;
    server 10.0.6.100:80 route=a slow_start=30s;
    server 10.0.7.100:80 route=b slow_start=30s;
    sticky route $route_cookie $route_uri;
}

upstream site-cluster2 {
    zone site 64k;
    least_conn;
    server 10.0.6.30:80 route=a slow_start=30s;
    server 10.0.7.187:80 route=b slow_start=30s;
    sticky route $route_cookie $route_uri;
}
And then this code is in the location / block.
map $http_x_newsite $proxyurl {
    default http://site-cluster;
    "true"  http://site-cluster2;
}

proxy_pass $proxyurl;
What happens is that the default servers always get the traffic, irrespective of whether I set the header or not.
Any ideas?
The map directive should be in the http context, not in a location:
Syntax: map string $variable { ... }
Default: —
Context: http
The rest looks sensible; it works for me.
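A sketch of the corrected layout, with the map at http level and the variable used inside the location:

http {
    # the upstream site-cluster / site-cluster2 blocks from above
    # also live here, alongside the map
    map $http_x_newsite $proxyurl {
        default http://site-cluster;
        "true"  http://site-cluster2;
    }

    server {
        listen 80;
        location / {
            proxy_pass $proxyurl;
        }
    }
}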
I want to replace Pound with nginx as a load balancer, and all tests look fine so far. I will use a typical upstream configuration like this:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
There are now 2 questions left open:
How long does this stickiness last? Is there a TTL to be defined somewhere?
Does the stickiness survive restarts and/or reloads of nginx?
I could not find the answer in the nginx wiki. Links to official docs are welcome.
It is based on a hash of the client's source IP address, and as long as you have the same set of backends, the stickiness will persist.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash
It comes up when you need session persistence: the scenario is that users should be directed to the same server as before, because the application demands it based on the previous connection.
ip_hash = key-value pair hashing [where key = visitor's IP, value = host server]