Turning off gzip on nginx for some URIs - nginx

I have this nodejs app behind nginx, let's call it A. Some requests to the app are proxied to another nginx/nodejs server (this is B, a backend API) by the nodejs app (not by nginx).
Currently, both nginx instances enable gzip compression, which is an issue. Responses from B to A are compressed, and A compresses them again, making the response unusable by the browser.
The ideal solution would be to tell A not to compress responses for requests forwarded to B. Unfortunately, I can't get it to work. Here's my nginx config on A:
gzip on;
gzip_types application/json;

location / {
    if ($request ~ /api/) {
        gzip off;
    }
    # proxy to the node app
    proxy_redirect off;
    proxy_pass http://wtb-backoffice;
}
With that setup, proxied requests are still double-compressed. If I replace gzip off with return 404, my /api/smth requests do return a 404, so the condition matches.
If I turn off compression on server B and enable it on A without the location condition, the content is compressed once and is thus readable, which makes sense. And with the gzip off condition present, responses are received uncompressed.
So I conclude that the gzip off directive only works when the raw content is uncompressed; otherwise nginx stupidly compresses it again. Does this make sense to anyone, and how can I fix it?
BTW, before you suggest dropping the proxying or letting nginx do it: A and B do not use the same authentication mechanisms, so the proxy controller on A does some magic and cannot be bypassed.
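One variant worth trying, sketched here without any guarantee it fixes the double compression: express the condition as a dedicated location block instead of an if, since if inside location has well-known pitfalls in nginx. The upstream name is taken from the config above:
location /api/ {
    gzip off;                          # never compress API responses on A
    proxy_redirect off;
    proxy_pass http://wtb-backoffice;
}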

Related

nginx ignore missing HTTP headers

Is there any way to tell nginx to ignore missing HTTP headers when proxying requests?
There is an existing proprietary HTTP server that sends responses without any headers. The server cannot be configured. I need various endpoints from this server in a web application, so I want to set up nginx to proxy requests to it. I have this location configuration in my regular server block:
location /api/ {
    proxy_pass http://localhost:80/;
}
When calling the corresponding URIs, nginx complains:
upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 127.0.0.1, server: localhost, request: ....
Is there any way to tell nginx not to expect any headers, and just to forward the received payload?
Thanks for your help!
Kind regards,
Andreas
Edit: OK, I found that the server is using HTTP/0.9; calling curl directly against the server threw an error:
curl: (1) Received HTTP/0.9 when not allowed
Using the --http0.9 option got the desired result (which a browser receives without further ado). Is there any way to tell nginx to proxy to an HTTP/0.9 server?
You can configure nginx to ignore missing headers by setting the proxy_ignore_headers directive to X-Accel-Expires and Expires. This tells nginx to ignore those specific headers and not to expect them to be present when proxying requests.
You can add this to the location block in your nginx configuration:
proxy_ignore_headers X-Accel-Expires Expires;
Additionally, proxy_pass_request_headers off; could be used; this tells nginx not to forward the client's request headers to the proxied server.
As for the payload, as long as the underlying protocol is HTTP, it should be fine: the payload will be sent in the body of the HTTP request.
So your location block could look like this:
location /api/ {
    proxy_pass http://localhost:80/;
    proxy_pass_request_headers off;
}
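For completeness, a sketch combining both directives mentioned in this answer:
location /api/ {
    proxy_pass http://localhost:80/;
    proxy_ignore_headers X-Accel-Expires Expires;  # don't act on these upstream response headers
    proxy_pass_request_headers off;                # don't forward the client's request headers
}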
Please note that this may not be recommended, as the missing headers may contain important information (such as authentication) that should be handled correctly.

Nginx proxy_pass changes behavior when defining the target in a variable

I'm reverse proxying an AWS API Gateway stage using nginx. This is pretty straightforward:
location /api {
    proxy_pass https://xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com:443/production;
    proxy_ssl_server_name on;
}
However, this approach makes nginx serve a stale upstream when the DNS entry for xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com changes, as it resolves the entry only once at startup.
Following this article: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/ I am now trying to define my proxy target in a variable like this:
location /api {
    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com/production;
    proxy_pass https://$apigateway:443/;
    proxy_ssl_server_name on;
}
This makes API Gateway respond with ForbiddenException: Forbidden to requests that pass through the previous setup without a variable. Reading this document: https://aws.amazon.com/de/premiumsupport/knowledge-center/api-gateway-troubleshoot-403-forbidden/ tells me this could be either WAF filtering my request (though WAF is not enabled for that API) or a missing Host header for a private API (though the API is public).
I think I might be doing one of these things wrong:
The syntax used for setting the variable is wrong
Using the variable makes nginx send different headers to API Gateway, and I need to intervene manually. I already tried setting a Host header, but it did not make any difference.
The nginx version in use is 1.17.3
You have the URI /production embedded in the variable, so the :443 is appended to the end of the URI rather than to the host name. I'm also not convinced you need the :443, as it is the default port for https.
Also, when variables are used in proxy_pass and a URI is specified in the directive, it is passed to the server as is, replacing the original request URI. See this document for details.
You should use rewrite...break to change the URI and remove any optional URI from the proxy_pass statement.
For example:
location /api {
    set $apigateway xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com;
    rewrite ^/api(.*)$ /production$1 break;
    proxy_pass https://$apigateway;
    proxy_ssl_server_name on;
}
Also, you will need a resolver statement somewhere in your configuration.
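For example, a minimal sketch (8.8.8.8 is purely illustrative; any DNS server reachable from nginx will do):
resolver 8.8.8.8 valid=30s;  # re-resolve upstream hostnames, caching each answer for 30 seconds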
It seems like a false positive at WAF. Did you try disabling AWS WAF? https://docs.aws.amazon.com/waf/latest/developerguide/remove-protection.html

How to cache the gzip content in nginx?

When several clients request the same file, nginx's gzip function compresses the response for each of them separately. I would like later responses to reuse the cached gzipped content. How do I configure this?
There was a discussion of the same question in the NGINX forums.
I find that this suggestion makes the most sense. However, it mostly applies when you proxy with NGINX, not when you use the fastcgi cache.
Essentially, you ensure Accept-Encoding: gzip is sent to your backend so that you always generate and cache gzipped content, and then use the gunzip module for clients that don't request gzip encoding.
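As a concrete illustration of that suggestion, here is a minimal sketch. The upstream name, cache zone name, and cache path are placeholders, and gunzip on requires nginx to be built with the ngx_http_gunzip_module:
# in the http{} context: define a cache zone (path and sizes are illustrative)
proxy_cache_path /var/cache/nginx keys_zone=gzipcache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://backend;              # placeholder upstream
        proxy_set_header Accept-Encoding gzip;  # always request gzipped content from the backend
        proxy_cache gzipcache;                  # serve repeat requests from the cache
        gunzip on;                              # decompress on the fly for clients that don't accept gzip
    }
}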

How to use NGINX as forward proxy for any requested location?

I am trying to configure NGINX as a forward proxy to replace Fiddler, which we are currently using as a forward proxy. The Fiddler feature we use allows us to proxy ALL incoming requests through port 8888. How do I do that with NGINX?
In all the examples of NGINX as a reverse proxy, I see proxy_pass defined with a specific upstream/proxied server. How can I configure it to go to the requested server, whatever that server is, the same way I use Fiddler as a forward proxy?
Example:
In my code:
WebProxy proxyObject = new WebProxy("http://mynginxproxyserver:8888/",true);
WebRequest req = WebRequest.Create("http://www.contoso.com");
req.Proxy = proxyObject;
In mynginxproxyserver/nginx.conf I do not want to delegate the proxying to another server (e.g. proxy_pass set to http://someotherproxyserver). Instead I want it to just be a proxy server, forwarding requests from my client (see above) to the requested host. That's what Fiddler does when you enable it as a proxy: http://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/UseFiddlerAsReverseProxy
Your code appears to be using a forward proxy (often just called a "proxy"), not a reverse proxy, and the two operate quite differently. A reverse proxy is for the server end, something the client doesn't really see or think about; it retrieves content from the backend servers and hands it to the client. A forward proxy is something the client sets up in order to connect to the rest of the internet. In turn, the server may know nothing about your forward proxy.
Nginx was originally designed to be a reverse proxy, not a forward proxy, but it can still be used as one. That's probably why you couldn't find much configuration for it.
This is more of a theory answer, as I've never done this myself, but a configuration like the following should work.
server {
    listen 8888;

    location / {
        resolver 8.8.8.8; # may or may not be necessary
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
This is just the important bits, you'll need to configure the rest.
The idea is that proxy_pass passes the request to a host taken from a variable rather than a predefined one. So if you request http://example.com/foo?bar, your HTTP Host header will contain example.com, and proxy_pass will retrieve data from http://example.com/foo?bar.
The document that you linked is using it as a reverse proxy. It would be equivalent to
proxy_pass http://localhost:80;
You can run into URL-encoding problems when using the $uri variable as suggested by Grumpy, since it is automatically decoded by nginx. I'd suggest you modify the proxy_pass line to:
proxy_pass http://$http_host$request_uri;
The $request_uri variable leaves the encoding intact and also contains all the query parameters.
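Putting the two answers together, the full sketch becomes:
server {
    listen 8888;

    location / {
        resolver 8.8.8.8;                          # resolve arbitrary hosts at runtime
        proxy_pass http://$http_host$request_uri;  # forward to whatever host the client requested
    }
}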

How to fix Sinatra redirecting https to http under nginx

I have a Sinatra app running behind nginx (with thin as the application server), and I'm using redirect '/<path>' statements in Sinatra. However, when I access the site over https, those redirects send me to http://localhost/<path> rather than to https://localhost/<path> as they should.
Currently, nginx passes control to thin with the directive proxy_pass http://thin_cluster, where thin_cluster is
upstream thin_cluster { server unix:/tmp/thin.cct.0.sock; }
How can I fix this?
In order for Sinatra to correctly assemble the URL used for redirects, it needs to be able to determine whether the request is using SSL, so that the redirect can be made using http or https as appropriate.
Obviously the actual call to thin isn't using SSL, as this is handled by the front-end web server and the proxied request is in the clear. We therefore need a way to tell Sinatra to treat the request as secure, even though it isn't actually using SSL.
Ultimately, the code that determines whether the request should be treated as secure is in the Rack::Request#ssl? and Rack::Request#scheme methods. The scheme method examines the env hash to see if one of a number of entries is present. One of these is HTTP_X_FORWARDED_PROTO, which corresponds to the X-Forwarded-Proto HTTP header. If this is set, its value is used as the protocol scheme (http or https).
So if we add this HTTP header to the request when it is proxied from nginx to the back end, Sinatra will be able to correctly determine when to redirect to https. In nginx we can add headers to proxied requests with proxy_set_header, and the scheme is available in the $scheme variable.
So adding the line
proxy_set_header X-Forwarded-Proto $scheme;
to the nginx configuration after the proxy_pass line should make it work.
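In context, using the proxy_pass target from the question, the proxied location might look like this:
location / {
    proxy_pass http://thin_cluster;
    proxy_set_header X-Forwarded-Proto $scheme;  # tells Rack whether the original request used https
}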
You can force all links to go to https in the nginx layer.
In nginx.conf:
server {
    listen 80;
    server_name example.com;
    rewrite ^(.*) https://$server_name$1 redirect;
}
This is also good to have, to ensure that your requests are always made over https.
