nginx conditional proxy pass based on header

I'm trying to manage a deployment to servers running behind an nginx plus server configured as a load balancer. The app servers are sent traffic from nginx using the proxy_pass directive, and what I'd like to do is to direct traffic to one upstream by default, but a different one for testing as we deploy to spare instances; I'm trying to select this by having developers set a header in their browser, which nginx then looks for and sets a variable for the relevant proxy.
It all seems sensible, but it simply doesn't work - I'm not sure whether I've misunderstood how this works, but it does seem odd.
The upstreams are configured as
upstream site-cluster {
    zone site 64k;
    least_conn;
    server 10.0.6.100:80 route=a slow_start=30s;
    server 10.0.7.100:80 route=b slow_start=30s;
    sticky route $route_cookie $route_uri;
}

upstream site-cluster2 {
    zone site 64k;
    least_conn;
    server 10.0.6.30:80 route=a slow_start=30s;
    server 10.0.7.187:80 route=b slow_start=30s;
    sticky route $route_cookie $route_uri;
}
And then this code is in the location / block.
map $http_x_newsite $proxyurl {
    default http://site-cluster;
    "true" http://site-cluster2;
}

proxy_pass $proxyurl;
What happens is it's always the default servers which get sent the traffic, irrespective of whether I set the header or not.
Any ideas?

The map directive should be in the http context, not in a location block:
Syntax: map string $variable { ... }
Default: —
Context: http
The rest looks sensible, works for me.
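For example, moving the map up into the http context might look like this (a minimal sketch reusing the question's header and upstream names; the listen/server details are placeholders):
http {
    map $http_x_newsite $proxyurl {
        default http://site-cluster;
        "true" http://site-cluster2;
    }

    # upstream site-cluster { ... } and upstream site-cluster2 { ... } as defined in the question

    server {
        listen 80;

        location / {
            proxy_pass $proxyurl;
        }
    }
}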

Related

How to proxy RDP via Nginx

I'm using the below config in nginx to proxy RDP connection:
server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://192.168.0.100:3389;
    }
}
but the connection doesn't go through. My guess is that the problem is http in proxy_pass. Googling "Nginx RDP" didn't yield much.
Anyone knows if it's possible and if yes how?
Well, actually you are right that http is the problem, but not exactly the one in your code block. Let's explain it a bit:
In your nginx.conf file you have something similar to this:
http {
    ...
    ...
    ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
So everything you write in your conf files is inside this http block/scope. But RDP is not HTTP; it is a different protocol.
The only workaround I know for nginx to handle this is to work at the TCP level.
So inside your nginx.conf, outside the http block, you have to declare a stream block like this:
stream {
    # ...
    server {
        listen 80;
        proxy_pass 192.168.0.100:3389;
    }
}
With the above configuration nginx just proxies your backend at the TCP layer, with a cost of course. As you may notice, the server_name directive is missing - you can't use it in the stream scope - plus you lose all the logging functionality that comes with the http level.
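One extra caveat from my side (an assumption about your build, not part of the answer above): the stream module is not always available by default; on distributions that ship it as a dynamic module it has to be loaded at the very top of nginx.conf, for example:
# only needed if ngx_stream_module is built as a dynamic module; the path varies by distribution
load_module modules/ngx_stream_module.so;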
For more info on this topic, check the docs.
For anyone who is looking to load balance RDP connection using Nginx, here is what I did:
Configure nginx as you normally would, to reroute HTTP(S) traffic to your desired server.
On that server, install myrtille (it needs IIS and .Net 4.5) and you'll be able to RDP into your server from a browser!

Use nginx to proxy requests to two different services?

Goal: Stand up a service that will accept requests to
http://foo.com/a
and turn around and proxy that request to two different services
http://bar.com/b
http://baz.com/c
The background is that I'm using a service that can integrate with other 3rd-party services by accepting POST requests, and then posting event callbacks to that 3rd-party service by POSTing to a URL. Trouble is that it only supports a single URL in its configuration, so it becomes impossible to integrate more than one service this way.
I've looked into other services like webhooks.io (waaaay too expensive for a moderate amount of traffic) and reflector.io (beta - falls over with a moderate amount of traffic), but so far nothing meets my needs. So I started poking around at standing up my own service, and I'm hoping for as hands-off as possible. Feels like nginx ought to be able to do this...
I came across the following snippet which someone else classified as a bug, but feels like the start of what I want:
upstream apache {
    server 1.2.3.4;
    server 5.6.7.8;
}
...
location / {
    proxy_pass http://apache;
}
Rather than round-robin requests to apache, that will apparently send the same request to both apache servers, which sounds promising. Trouble is, it sends it to the same path on both servers. In my case, the two services will have different paths (/b and /c), and neither is the same path as the inbound request (/a).
So... Any way to specify a destination path on each server in the upstream configuration, or some other clever way of doing this?
You can create local servers, whose proxy_pass adds the different paths (/b, /c).
upstream local {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

location / {
    proxy_pass http://local;
}

server {
    listen 8000;
    location / {
        proxy_pass http://1.2.3.4/b;
    }
}

server {
    listen 8001;
    location / {
        proxy_pass http://5.6.7.8/c;
    }
}
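As a side note of my own (not part of the answer above): if the inbound path is known exactly, such as /a in the question, an exact-match location combined with a URI in proxy_pass replaces the matched part of the request URI, so /a maps directly onto /b:
server {
    listen 8000;

    location = /a {
        # the matched "/a" is replaced by the "/b" URI given in proxy_pass
        proxy_pass http://1.2.3.4/b;
    }
}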

Domain name and port based proxy

I think I finally grasped how Docker works, so I am getting ready for the next step: cramming a whole bunch of unrelated applications into a single server with a single public IP. Say, for example, that I have a number of legacy Apache2-VHost-based web sites, so the best I could figure was to run a LAMP container to replicate the current situation and improve later. For argument's sake, here is what I have: a container at 172.17.0.2:80 that serves
http://www.foo.com
http://blog.foo.com
http://www.bar.com
Quite straightforward: publishing port 80 lets me correctly access all those sites. Next, I have two services that I need to run, so I built two containers
service-a -> 172.17.0.3:3000
service-b -> 172.17.0.4:5000
and all is good, I can privately access those services from my docker host. The trouble comes when I want to publicly restrict access to service-a through service-a.bar.com:80 only, and to service-b through www.foo.com:5000 only. A lot of reading later, it would seem that I have to create a dreadful artefact called a proxy, or reverse proxy, to make things more confusing. I have no idea what I'm doing, so I dove nose-first into nginx -- which I had never used before -- because someone told me it's better than Apache at dealing with lots of small tasks and requests -- not that I would know how to turn Apache into a proxy, mind you. Anyway, nginx sounded perfect for a thing that has to take a request and pass it on to another server, so I started reading docs and I produced the following (in addition to the correctly working vhosts):
upstream service-a-bar-com-80 {
    server 172.17.0.3:3000;
}

server {
    server_name service-a.bar.com;
    listen 80;

    location / {
        proxy_pass http://service-a-bar-com-80;
        proxy_redirect off;
    }
}

upstream www-foo-com-5000 {
    server 172.17.0.4:5000;
}

server {
    server_name www.foo.com;
    listen 5000;

    location / {
        proxy_pass http://www-foo-com-5000;
        proxy_redirect off;
    }
}
Which somewhat works, until I access http://blog.bar.com:5000 which brings up service-b. So, my question is: what am I doing wrong?
nginx (like Apache) always has a default server for a given ip+port combination. You only have one server listening on port 5000, so it is your de facto default server for services on port 5000.
So blog.bar.com (which I presume resolves to the same IP address as www.foo.com) will use the default server for port 5000.
If you want to prevent that server block from being the default server for port 5000, set up another server block on the same port and mark it with the default_server keyword, as follows:
server {
    listen 5000 default_server;
    root /var/empty;
}
You can use a number of techniques to render the server inaccessible.
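For instance, one common option (my example, not from the linked document) is to return nginx's non-standard status 444, which closes the connection without sending any response:
server {
    listen 5000 default_server;
    # 444 is nginx-specific: drop the connection without a response
    return 444;
}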
See this document for more.

nginx specify server for a particular request

Let's say I have ip_hash; turned on for load balancing between 4 different servers. So, client's IP address is used as a hashing key to determine which server his requests get routed to.
However, for file upload, it's best to keep all files in a single server. So, I want all /upload requests get routed to server 1 for any client. This means all requests obey IP-hash, except POST /upload which must be sent to server 1.
Is there a way to create this exception in NGINX? Thanks!
Define two upstream containers, one with full load balancing and another with the POST specific service requirements:
upstream balancing { ... }
upstream uploading { ... }
Also, within the http container, define a map of the request method:
map $request_method $upstream {
    default balancing;
    POST uploading;
}
Finally, within the server container, define a specific proxy_pass for the /upload URI:
location / {
    proxy_pass http://balancing;
}

location /upload {
    proxy_pass http://$upstream;
}
The upstream specification is evaluated from the value of the REQUEST_METHOD.
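Put together, a minimal sketch might look like the following (the backend addresses are placeholders I've assumed; 10.0.0.1 stands in for the server 1 that should receive all uploads):
http {
    upstream balancing {
        ip_hash;
        server 10.0.0.1;
        server 10.0.0.2;
        server 10.0.0.3;
        server 10.0.0.4;
    }

    upstream uploading {
        server 10.0.0.1;   # every POST /upload lands here
    }

    map $request_method $upstream {
        default balancing;
        POST uploading;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://balancing;
        }

        location /upload {
            # GET /upload still follows ip_hash; POST /upload goes to "uploading"
            proxy_pass http://$upstream;
        }
    }
}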

Nginx, load-balancing using sticky and least connections algorithms simultaneously

We use Nginx as a load balancer for our websocket application. Every backend server keeps session information, so every request from a client must be forwarded to the same server. So we use the ip_hash directive to achieve this:
upstream app {
    ip_hash;
    server 1;
}
The problem appears when we want to add another backend server:
upstream app {
    ip_hash;
    server 1;
    server 2;
}
New connections go to server 1 and server 2 - but this is not what we need in this situation, as load on server 1 continues to increase - we still need sticky sessions, but with the least_conn algorithm enabled too, so our two servers receive approximately equal load.
We also considered using the Nginx-sticky-module, but the documentation says that if no sticky cookie is available it will fall back to the default round-robin Nginx algorithm - so it also does not solve the problem.
So the question is can we combine sticky and least connections logic using Nginx? Do you know which other load balancers solve this problem?
Probably using the split_clients module could help
upstream app {
    ip_hash;
    server 127.0.0.1:8001;
}

upstream app_new {
    ip_hash;
    server 127.0.0.1:8002;
}

split_clients "${remote_addr}AAA" $upstream_app {
    50% app_new;
    *   app;
}
This will split your traffic and create the variable $upstream_app, which you could use like:
server {
    location /some/path/ {
        proxy_pass http://$upstream_app;
    }
}
This is a workaround to get both least_conn-style distribution and a load balancer that works with sticky sessions; the "downside" is that if more servers need to be added, a new split needs to be created, for example:
split_clients "${remote_addr}AAA" $upstream_app {
    30% app_another_server;
    30% app_new;
    *   app;
}
For testing:
for x in {1..10}; do \
curl "0:8080?token=$(LC_ALL=C; cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)"; done
More info about this module can be found in this article (Performing A/B testing).
You can easily achieve this using HAProxy and I indeed suggest going through it thoroughly to see how your current setup can benefit.
With HA Proxy, you'd have something like:
backend nodes
    # Other options above omitted for brevity
    cookie SRV_ID prefix
    server web01 127.0.0.1:9000 cookie check
    server web02 127.0.0.1:9001 cookie check
    server web03 127.0.0.1:9002 cookie check
Which simply means that the proxy is tracking requests to and from the servers by using a cookie.
However, if you don't want to use HAProxy, I'd suggest you change your session implementation to use an in-memory DB such as Redis/Memcached. This way, you can use leastconn or any other algorithm without worrying about sessions.
