Can I configure an nginx reverse proxy to not modify specific requests?

For the default server case, can the reverse proxy just send the request on to its originally intended location?
Essentially, just letting certain requests "pass through".
I tried something like this, but it did not work:
location / {
    proxy_pass $request_uri;
}
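That directive fails because $request_uri contains only the path and query string, so proxy_pass has no scheme or host to connect to. A minimal sketch of the pass-through idea, along the lines of the forward-proxy answer further down: it forwards to whatever host the client asked for, assuming plain HTTP, a usable Host header from the client, and that an external resolver such as 8.8.8.8 is acceptable:
server {
    listen 80 default_server;

    location / {
        resolver 8.8.8.8;                          # required because the upstream host is a variable
        proxy_set_header Host $http_host;          # keep the client's original Host header
        proxy_pass http://$http_host$request_uri;  # send the request on to the host the client named
    }
}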

Related

Nginx proxy_pass changes behavior when defining the target in a variable

I'm reverse proxying an AWS API Gateway stage using nginx. This is pretty straightforward:
location /api {
    proxy_pass https://xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com:443/production;
    proxy_ssl_server_name on;
}
However, this approach makes nginx use a stale upstream address when the DNS entry for xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com changes, because nginx resolves the name only once at startup.
Following this article: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/ I am now trying to define my proxy target in a variable like this:
location /api {
    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com/production;
    proxy_pass https://$apigateway:443/;
    proxy_ssl_server_name on;
}
With this setup, API Gateway responds with ForbiddenException: Forbidden to requests that passed through the previous, variable-free setup. According to this document, https://aws.amazon.com/de/premiumsupport/knowledge-center/api-gateway-troubleshoot-403-forbidden/, the cause could be either WAF filtering my request (but WAF is not enabled for that API) or a missing Host header for a private API (but the API is public).
I think I might be doing one of these things wrong:
The syntax used for setting the variable is wrong
Using the variable makes nginx send different headers to API Gateway, and I need to intervene manually. I did already try setting a Host header, but it did not make any difference.
The nginx version in use is 1.17.3
You have the URI /production embedded in the variable, so the :443 is appended to the end of the URI rather than to the host name. I'm not convinced you need the :443 at all, since it is the default port for https.
Also, when variables are used in proxy_pass and a URI is specified in the directive, it is passed to the server as is, replacing the original request URI. See this document for details.
You should use rewrite...break to change the URI and remove any optional URI from the proxy_pass statement.
For example:
location /api {
    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com;
    rewrite ^/api(.*)$ /production$1 break;
    proxy_pass https://$apigateway;
    proxy_ssl_server_name on;
}
Also, you will need a resolver statement somewhere in your configuration.
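For example, a sketch of the same location with a resolver added (8.8.8.8 and the valid= interval are only illustrative; use whatever DNS server your environment provides):
location /api {
    resolver 8.8.8.8 valid=10s;   # illustrative resolver; re-resolves the hostname periodically
    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com;
    rewrite ^/api(.*)$ /production$1 break;
    proxy_pass https://$apigateway;
    proxy_ssl_server_name on;
}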
It seems like a false positive from WAF. Did you try disabling AWS WAF (https://docs.aws.amazon.com/waf/latest/developerguide/remove-protection.html)?

How to use NGINX as forward proxy for any requested location?

I am trying to configure NGINX as a forward proxy to replace Fiddler, which we currently use as a forward proxy. The Fiddler feature we rely on proxies ALL incoming requests to port 8888. How do I do that with NGINX?
In all the examples of NGINX as a reverse proxy, proxy_pass is always pointed at a specific upstream/proxied server. How can I configure it to go to whatever server was requested, regardless of which server that is, the same way I use Fiddler as a forward proxy?
Example:
In my code:
WebProxy proxyObject = new WebProxy("http://mynginxproxyserver:8888/",true);
WebRequest req = WebRequest.Create("http://www.contoso.com");
req.Proxy = proxyObject;
In mynginxproxyserver/nginx.conf I do not want to delegate the proxying to another server (e.g. proxy_pass set to http://someotherproxyserver). Instead I want it to act as a plain proxy server and forward requests from my client (see above) to the requested host. That's what Fiddler does when you enable it as a proxy: http://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/UseFiddlerAsReverseProxy
Your code appears to be using a forward proxy (often just called a "proxy"), not a reverse proxy, and the two operate quite differently. A reverse proxy sits at the server end and is something the client doesn't really see or think about; it retrieves content from the backend servers and hands it to the client. A forward proxy is something the client sets up in order to reach the rest of the internet, and the server may know nothing about it.
Nginx was originally designed as a reverse proxy, not a forward proxy, but it can still be used as one. That's probably why you couldn't find much configuration guidance for it.
This is more of a theoretical answer, as I've never done this myself, but a configuration like the following should work:
server {
    listen 8888;
    location / {
        resolver 8.8.8.8; # may or may not be necessary
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
These are just the important bits; you'll need to configure the rest.
The idea is that proxy_pass passes the request to a variable host rather than a predefined one. So if you request http://example.com/foo?bar, your request will carry a Host header of example.com, and proxy_pass will retrieve the data from http://example.com/foo?bar.
The document that you linked is using it as a reverse proxy. It would be equivalent to
proxy_pass http://localhost:80;
You can run into URL-encoding problems when using the $uri variable as suggested by Grumpy, since it is decoded automatically by nginx. I'd suggest modifying the proxy_pass line to
proxy_pass http://$http_host$request_uri;
The variable $request_uri leaves the encoding intact and also contains all query parameters.
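Putting the two suggestions together, a hedged sketch of the full forward-proxy server block with $request_uri in place of $uri$is_args$args:
server {
    listen 8888;

    location / {
        resolver 8.8.8.8;                           # may or may not be necessary
        proxy_pass http://$http_host$request_uri;   # keeps the original encoding and the query string
    }
}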

how to use nginx as reverse proxy for cross domains

I need to achieve below test case using nginx:
www.example.com/api/ should redirect to ABC.com/api,
while www.example.com/api/site/login should redirect to XYZ.com/api/site/login
But in the browser, the user should only see www.example.com/api... (and not the proxied server's URL).
Please let me know how this can be achieved.
Using ABC.com is forbidden by Stack Overflow rules, so in the example config I use the domain names ABC.example.com and XYZ.example.com:
server {
    ...
    server_name www.example.com;
    ...
    location /api/ {
        proxy_set_header Host ABC.example.com;
        proxy_pass http://ABC.example.com;
    }
    location /api/site/login {
        proxy_set_header Host XYZ.example.com;
        proxy_pass http://XYZ.example.com;
    }
    ...
}
(replace http:// with https:// if needed)
The order of location directives is of no importance because, as the documentation states, the location with the longest matching prefix is selected.
With the proxy_set_header directive, nginx behaves exactly the way you need, and the user keeps seeing www.example.com/api... Without it, the proxied servers may respond with an HTTP 301 redirect to ABC.example.com or XYZ.example.com.
You don't need to specify a URI in the proxy_pass parameter because, as the documentation states, if proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed.
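For contrast, a hypothetical sketch of what happens when a URI is added to proxy_pass: the part of the request matching the location prefix is replaced by that URI, so /api/foo would reach the upstream as /v2/foo (the /v2/ path is made up purely for illustration):
location /api/ {
    proxy_set_header Host ABC.example.com;
    proxy_pass http://ABC.example.com/v2/;   # /api/foo is forwarded upstream as /v2/foo
}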
You can specify your servers ABC.example.com and XYZ.example.com as domain names or as IP addresses. If you specify them as domain names, you need to specify the additional parameter resolver in your server config. You can use your local name server if you have one, or use something external like Google public DNS (8.8.8.8) or DNS provided for you by your ISP:
server {
    ...
    server_name www.example.com;
    resolver 8.8.8.8;
    ...
}
Try this:
location /api {
    proxy_pass http://proxiedsite.com/api;
}
When NGINX proxies a request, it sends the request to a specified proxied server, fetches the response, and sends it back to the client. It is possible to proxy requests to an HTTP server (another NGINX server or any other server) or a non-HTTP server (which can run an application developed with a specific framework, such as PHP or Python) using a specified protocol. Supported protocols include FastCGI, uwsgi, SCGI, and memcached.
To pass a request to an HTTP proxied server, the proxy_pass directive is specified inside a location.
Resource from NGINX Docs

nginx specify server for a particular request

Let's say I have ip_hash; turned on for load balancing between 4 different servers, so the client's IP address is used as a hashing key to determine which server their requests get routed to.
However, for file uploads it's best to keep all files on a single server, so I want all /upload requests routed to server 1 for any client. In other words, all requests obey the IP hash except POST /upload, which must be sent to server 1.
Is there a way to create this exception in NGINX? Thanks!
Define two upstream containers, one with full load balancing and another with the POST specific service requirements:
upstream balancing { ... }
upstream uploading { ... }
Also, within the http container, define a map of the request method:
map $request_method $upstream {
    default balancing;
    POST    uploading;
}
Finally, within the server container, define a specific proxy_pass for the /upload URI:
location / {
    proxy_pass http://balancing;
}
location /upload {
    proxy_pass http://$upstream;
}
The upstream to use is chosen at request time from the value of $request_method.
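For completeness, a hypothetical sketch of the two upstream containers for the four-server setup described in the question (the IP addresses are placeholders):
upstream balancing {
    ip_hash;             # normal traffic is distributed by client IP
    server 10.0.0.1;
    server 10.0.0.2;
    server 10.0.0.3;
    server 10.0.0.4;
}
upstream uploading {
    server 10.0.0.1;     # every upload goes to server 1
}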

nginx conditional proxy pass based on header

I'm trying to manage a deployment to servers running behind an NGINX Plus server configured as a load balancer. The app servers receive traffic from nginx via the proxy_pass directive. What I'd like to do is direct traffic to one upstream by default, but to a different one for testing as we deploy to spare instances. I'm trying to select this by having developers set a header in their browser, which nginx then looks for in order to set a variable pointing at the relevant upstream.
It all seems sensible, but it simply doesn't work. I'm not sure whether I'm misunderstanding how it works, but it does seem odd.
The upstreams are configured as
upstream site-cluster {
    zone site 64k;
    least_conn;
    server 10.0.6.100:80 route=a slow_start=30s;
    server 10.0.7.100:80 route=b slow_start=30s;
    sticky route $route_cookie $route_uri;
}
upstream site-cluster2 {
    zone site 64k;
    least_conn;
    server 10.0.6.30:80 route=a slow_start=30s;
    server 10.0.7.187:80 route=b slow_start=30s;
    sticky route $route_cookie $route_uri;
}
And then this code is in the location / block.
map $http_x_newsite $proxyurl {
    default http://site-cluster;
    "true" http://site-cluster2;
}
proxy_pass $proxyurl;
What happens is that the default servers always get the traffic, regardless of whether I set the header.
Any ideas?
The map directive should be in the http context, not inside a location block:
Syntax: map string $variable { ... }
Default: —
Context: http
The rest looks sensible and works for me.
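For clarity, a hedged sketch of the corrected layout, with the map block moved up into the http context:
http {
    map $http_x_newsite $proxyurl {
        default http://site-cluster;
        "true" http://site-cluster2;
    }

    server {
        ...
        location / {
            proxy_pass $proxyurl;
        }
    }
}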
