nginx specify server for a particular request - nginx

Let's say I have ip_hash; turned on for load balancing between 4 different servers, so the client's IP address is used as a hashing key to determine which server their requests get routed to.
However, for file uploads it's best to keep all files on a single server, so I want all /upload requests routed to server 1 for every client. In other words, all requests obey the IP hash except POST /upload, which must always be sent to server 1.
Is there a way to create this exception in NGINX? Thanks!

Define two upstream containers, one with full load balancing and another with the POST specific service requirements:
upstream balancing { ... }
upstream uploading { ... }
Also, within the http container, define a map of the request method:
map $request_method $upstream {
default balancing;
POST uploading;
}
Finally, within the server container, define a specific proxy_pass for the /upload URI:
location / {
proxy_pass http://balancing;
}
location /upload {
proxy_pass http://$upstream;
}
The upstream is selected by evaluating the value of $request_method.
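Putting the pieces together, a minimal sketch of the whole configuration might look like this (the server addresses are placeholders; server 1 here stands for the upload server):

```nginx
http {
    # normal IP-hash load balancing across all four servers
    upstream balancing {
        ip_hash;
        server 192.168.0.1;
        server 192.168.0.2;
        server 192.168.0.3;
        server 192.168.0.4;
    }

    # server 1 only, for uploads
    upstream uploading {
        server 192.168.0.1;
    }

    # choose the upstream name based on the request method
    map $request_method $upstream {
        default balancing;
        POST    uploading;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://balancing;
        }

        # GET /upload still follows the IP hash; POST /upload goes to server 1
        location /upload {
            proxy_pass http://$upstream;
        }
    }
}
```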

Related

Can I configure an nginx reverse proxy to not modify specific requests?

For the default server case, can the reverse proxy just send the request on to its originally intended location?
Essentially, just letting certain requests "pass through".
I tried something like this, but it did not work:
location / {
proxy_pass $request_uri
}

Use nginx to proxy request to two different services?

Goal: Stand up a service that will accept requests to
http://foo.com/a
and turn around and proxy that request to two different services
http://bar.com/b
http://baz.com/c
The background is that I'm using a service that can integrate with other 3rd-party services by accepting POST requests, and then posting event callbacks to that 3rd-party service by POSTing to a URL. The trouble is that it only supports a single URL in its configuration, so it becomes impossible to integrate more than one service this way.
I've looked into other services like webhooks.io (waaaay too expensive for a moderate amount of traffic) and reflector.io (beta - falls over with a moderate amount of traffic), but so far nothing meets my needs. So I started poking around at standing up my own service, and I'm hoping for as hands-off as possible. Feels like nginx ought to be able to do this...
I came across the following snippet which someone else classified as a bug, but feels like the start of what I want:
upstream apache {
server 1.2.3.4;
server 5.6.7.8;
}
...
location / {
proxy_pass http://apache;
}
Rather than round-robin requests to apache, that will apparently send the same request to both apache servers, which sounds promising. The trouble is, it sends it to the same path on both servers. In my case, the two services will have different paths (/b and /c), and neither is the same path as the inbound request (/a).
So... Any way to specify a destination path on each server in the upstream configuration, or some other clever way of doing this?
You can create local server blocks, and have each local server's proxy_pass add the different path (/b, /c):
upstream local {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

location / {
    proxy_pass http://local;
}

server {
    listen 8000;
    location / {
        proxy_pass http://1.2.3.4/b;
    }
}

server {
    listen 8001;
    location / {
        proxy_pass http://5.6.7.8/c;
    }
}

Nginx - Redirect requests to all backends

I have 5 backend servers. I want nginx to forward the POST request for /myapp/refresh to all 5 backend servers. For any other request, it can load-balance as usual. Is this possible? Can you please give a sample configuration?
I'm not aware of a ready-to-use solution that does what you want.
It is definitely possible to implement such behavior in C or Lua.
You could develop an nginx C module, but that is not a trivial task and has a serious learning curve.
Alternatively, you could use https://github.com/openresty/lua-nginx-module with something like https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi.
But in either case you would have to implement some logic for when, and with which response, you reply to the client.
A question to think about: do you need to respond with 200 OK if one of the backends times out or responds with an error?
You can try the ngx_http_mirror_module module (1.13.4), which implements mirroring of an original request by creating background mirror subrequests. Responses to mirror subrequests are ignored. https://nginx.org/en/docs/http/ngx_http_mirror_module.html
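A sketch of that approach for the /myapp/refresh case (backend addresses are placeholders; one backend serves the primary response, the other four receive fire-and-forget mirror copies whose responses are ignored):

```nginx
location /myapp/refresh {
    # primary request: this backend's response is returned to the client
    proxy_pass http://10.0.0.1:8080;

    # background copies sent to the remaining backends
    mirror /mirror2;
    mirror /mirror3;
    mirror /mirror4;
    mirror /mirror5;
}

# internal locations, one per mirrored backend;
# $request_uri preserves the original path and query string
location = /mirror2 { internal; proxy_pass http://10.0.0.2:8080$request_uri; }
location = /mirror3 { internal; proxy_pass http://10.0.0.3:8080$request_uri; }
location = /mirror4 { internal; proxy_pass http://10.0.0.4:8080$request_uri; }
location = /mirror5 { internal; proxy_pass http://10.0.0.5:8080$request_uri; }
```

Note the limitation from the answer above still applies: the client only ever sees the primary backend's response, so errors from the mirrored backends go unnoticed.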
You should be able to use nginx as a load balancer using a simple config such as:
http {
upstream myproject {
server 127.0.0.1:8000 weight=3;
server 127.0.0.1:8001;
server 127.0.0.1:8002;
server 127.0.0.1:8003;
}
server {
listen 80;
server_name www.domain.com;
location / {
proxy_pass http://myproject;
}
}
}
docs:
https://www.nginx.com/resources/admin-guide/load-balancer/
This should route all requests including the POST request you mentioned.

how to use nginx as reverse proxy for cross domains

I need to achieve below test case using nginx:
www.example.com/api/ should redirect to ABC.com/api,
while www.example.com/api/site/login should redirect to XYZ.com/api/site/login
But in the browser, user should only see www.example.com/api.... (and not the redirected URL).
Please let me know how this can be achieved.
The usage of ABC.com is forbidden by stackoverflow rules, so in example config I use domain names ABC.example.com and XYZ.example.com:
server {
...
server_name www.example.com;
...
location /api/ {
proxy_set_header Host ABC.example.com;
proxy_pass http://ABC.example.com;
}
location /api/site/login {
proxy_set_header Host XYZ.example.com;
proxy_pass http://XYZ.example.com;
}
...
}
(replace http:// with https:// if needed)
The order of location directives is of no importance because, as the documentation states, the location with the longest matching prefix is selected.
With the proxy_set_header directive, nginx will behave exactly the way you need, and the user will see only www.example.com/api... Without it, the proxied server may respond with an HTTP 301 redirection to ABC.example.com or XYZ.example.com.
You don't need to specify a URI in the proxy_pass parameter because, as the documentation states, if proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed.
You can specify your servers ABC.example.com and XYZ.example.com as domain names or as IP addresses. Domain names written directly in proxy_pass are resolved once at startup; if the name must be resolved at run time (for example, when it comes from a variable), you also need to specify the resolver directive in your server config. You can use your local name server if you have one, or something external such as Google Public DNS (8.8.8.8) or the DNS provided by your ISP:
server {
...
server_name www.example.com;
resolver 8.8.8.8;
...
}
Try this:
location /api {
proxy_pass http://proxiedsite.com/api;
}
When NGINX proxies a request, it sends the request to a specified proxied server, fetches the response, and sends it back to the client. It is possible to proxy requests to an HTTP server (another NGINX server or any other server) or a non-HTTP server (which can run an application developed with a specific framework, such as PHP or Python) using a specified protocol. Supported protocols include FastCGI, uwsgi, SCGI, and memcached.
To pass a request to an HTTP proxied server, the proxy_pass directive
is specified inside a location.
Resource from NGINX Docs

nginx conditional proxy pass based on header

I'm trying to manage a deployment to servers running behind an nginx Plus server configured as a load balancer. The app servers receive traffic from nginx via the proxy_pass directive. What I'd like to do is direct traffic to one upstream by default, but to a different one for testing as we deploy to spare instances. I'm trying to select between them by having developers set a header in their browser, which nginx then looks for and uses to set a variable holding the relevant proxy URL.
It all seems sensible to me, but it simply doesn't work. I'm not sure if I've misunderstood how it works, but it does seem odd.
The upstreams are configured as
upstream site-cluster {
zone site 64k;
least_conn;
server 10.0.6.100:80 route=a slow_start=30s;
server 10.0.7.100:80 route=b slow_start=30s;
sticky route $route_cookie $route_uri;
}
upstream site-cluster2 {
zone site 64k;
least_conn;
server 10.0.6.30:80 route=a slow_start=30s;
server 10.0.7.187:80 route=b slow_start=30s;
sticky route $route_cookie $route_uri;
}
And then this code is in the location / block.
map $http_x_newsite $proxyurl {
default http://site-cluster;
"true" http://site-cluster2;
}
proxy_pass $proxyurl;
What happens is it's always the default servers which get sent the traffic, irrespective of whether I set the header or not.
Any ideas?
The map directive should be in the http context, not in a location:
Syntax: map string $variable { ... }
Default: —
Context: http
The rest looks sensible, works for me.
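For illustration, a minimal sketch of the corrected layout, with the upstream bodies abbreviated (they stay exactly as defined in the question):

```nginx
http {
    upstream site-cluster  { ... }
    upstream site-cluster2 { ... }

    # map must live at http level; it is evaluated lazily per request,
    # so $proxyurl still reflects each request's X-Newsite header
    map $http_x_newsite $proxyurl {
        default http://site-cluster;
        "true"  http://site-cluster2;
    }

    server {
        listen 80;

        location / {
            proxy_pass $proxyurl;
        }
    }
}
```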