Normally files can be accessed at:
http://example.com/cats/cat1.zip
I want to encode/encrypt the pathname (/cats/cat1.zip) so that the link is not normally accessible but accessible after the pathname is encrypted/encoded:
http://example.com/Y2F0cy9jYXQxLnppcAo=
I'm using base64 encoding above for simplicity but would prefer encryption. How do I go about doing this? Do I have to write a custom module?
You can use an Nginx rewrite rule to rewrite the URL (from encoded to unencoded), and to apply your decoding logic you can use a custom function (I did it with the Perl module).
It could be something like this:
http {
    ...
    perl_modules perl/lib;
    ...
    perl_set $uri_decode 'sub {
        my $r = shift;
        my $uri = $r->uri;
        $uri = perl_magic_to_decode_the_url;
        return $uri;
    }';
    ...
    server {
        ...
        location /your-protected-urls-regex {
            rewrite ^(.*)$ $scheme://$host$uri_decode;
        }
    }
}
If your only concern is limiting access to certain URLs you may take a look at this post on Securing URLs with the Secure Link Module in Nginx.
It provides a fairly simple method for securing your files; the most basic way to encrypt your URLs is to use the secure_link_secret directive:
server {
    listen 80;
    server_name example.com;

    location /cats {
        secure_link_secret yoursecretkey;
        if ($secure_link = "") { return 403; }
        rewrite ^ /secure/$secure_link;
    }

    location /secure {
        internal;
        root /path/to/secret/files;
    }
}
The URL to access the cat1.zip file will be http://example.com/cats/80e2dfecb5f54513ad4e2e6217d36fd4/cat1.zip, where 80e2dfecb5f54513ad4e2e6217d36fd4 is the MD5 hash computed over a text string that concatenates two elements:
- The part of the URL that follows the hash, in our case cat1.zip
- The parameter to the secure_link_secret directive, in this case yoursecretkey
The above example also assumes the files accessible via the encrypted URLs are stored at /path/to/secret/files/secure directory.
Additionally, there is a more flexible, but also more complex, method for securing URLs with the ngx_http_secure_link_module module, using the secure_link and secure_link_md5 directives to limit URL access by IP address, define an expiration time for the URLs, etc.
If you need to completely obscure your URLs (including the cat1.zip part), you'll need to make a decision between:
Handling the decryption of the encrypted URL on the Nginx side — writing your own, or reusing a module written by someone else
Handling the decryption of the encrypted URL somewhere in your application — basically using Nginx to proxy your encrypted URLs to your application where you decrypt them and act accordingly, as #cnst writes above.
Both approaches have pros and cons, but IMO the latter is simpler and more flexible — once you set up your proxy you don't need to worry much about Nginx, nor compile it with special prerequisites; there's no need to write or compile code in a language other than what you are already using in your application (unless your application includes code in C, Lua or Perl).
Here's an example of a simple Nginx/Express application where you'd handle the decryption within your application. The Nginx configuration might look like:
server {
    listen 80;
    server_name example.com;

    location /cats {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:8000;
    }

    location /path/to/secured/files {
        internal;
    }
}
and on the application (Node.js/Express) side you may have something like:
const express = require('express');
const app = express();

app.get('/cats/:encrypted', function(req, res) {
    const encrypted = req.params.encrypted;
    //
    // Your decryption logic here
    //
    const decryptedFileName = decryptionFunction(encrypted);
    if (decryptedFileName) {
        // Let nginx serve the file via an internal redirect
        res.set('X-Accel-Redirect', `/path/to/secured/files/${decryptedFileName}`);
        res.end();
    } else {
        res.sendStatus(403); // return error
    }
});

app.listen(8000);
The above example assumes that the secured files are located in the /path/to/secured/files directory. It also assumes that, if the URL is accessible (properly encrypted), you are sending the files for download, but the same logic applies if you need to do something else.
The easiest way would be to write a simple backend (interfaced through proxy_pass, for example) that decrypts the filename from $uri and returns the result in the X-Accel-Redirect response header (which is subject to proxy_ignore_headers in nginx). That header then triggers an internal redirect within nginx to a location that cannot be accessed without going through the backend first, and the file is served with all the optimisations that are already part of nginx.
location /sec/ {
    proxy_pass http://decryptor/;
}

location /x-accel-redirect-here/ {
    internal;
    alias …;
}
The above approach follows the ‘microservices’ architecture: your decryptor service's only job is to perform decryption and access control, leaving it to nginx to serve the files correctly and as efficiently as possible through the specially treated, internal-only X-Accel-Redirect HTTP response header.
Consider using something like OpenResty with Lua.
Lua can do almost everything you want in nginx.
https://openresty.org/
https://github.com/openresty/
==== UPDATE ====
Now we also have njs (https://nginx.org/en/docs/njs/), JavaScript for nginx; it, too, can do almost everything.
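As a sketch of the decoding step from the original question (the base64 path segment back to the real path), here is the logic in plain Node; the same few lines port directly to njs or Lua:

```javascript
// "Y2F0cy9jYXQxLnppcAo=" is the encoded path from the question;
// it decodes to "cats/cat1.zip" plus a trailing newline (from `echo`), hence the trim().
const encoded = 'Y2F0cy9jYXQxLnppcAo=';
const decoded = Buffer.from(encoded, 'base64').toString('utf8').trim();
console.log(decoded); // cats/cat1.zip
```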
If anyone bumps into the problem of unescaping query arguments in Nginx, this is how I solved it on a standard Nginx setup on Ubuntu 20.04 (the query argument in the url is foo as in https://some.domain.com/some/path?foo=some%2Fquery%2Fvalue):
perl_modules perl/lib;

perl_set $arg_foo_decoded 'sub {
    my $r = shift;
    my $arg_foo = $r->variable("arg_foo");
    $arg_foo =~ s/\+/ /g;
    my $arg_foo_decoded = $r->unescape($arg_foo);
    return $arg_foo_decoded;
}';
I could then use the $arg_foo_decoded variable in my location blocks. It now contains some/query/value.
Edit
Added a line to ensure compatibility with form values containing spaces encoded as + (see also: https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.1).
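The two-step decode above (plus-to-space, then percent-unescape) is the standard application/x-www-form-urlencoded rule; for reference, the equivalent in plain JavaScript is:

```javascript
// form-urlencoded decoding: "+" means space, then percent-unescape the rest
function formDecode(value) {
  return decodeURIComponent(value.replace(/\+/g, ' '));
}

console.log(formDecode('some%2Fquery%2Fvalue')); // some/query/value
```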
I’ve been trying to get this done for a couple of days and I can’t. I have two web apps, one production and one testing, let’s say https://production.app and https://testing.app. I need to make a reverse proxy under https://production.app/location that points to https://testing.app/location (at the moment I only need one piece of functionality from the testing environment in production). The configuration I created does proxy this exact location, but that functionality also loads resources from the /static directory, resulting in requests to https://production.app/static/xyz.js instead of https://testing.app/static/xyz.js, and /static can’t simply be proxied. Is there a way to change only this proxied traffic so that the assets resolve to https://testing.app/static (and of course any other locations)?
Below is my current config (only directives regarding proxy):
server {
    listen 443 ssl;
    server_name production.app;
    root /var/www/production.app;

    location /location/ {
        proxy_pass https://testing.app/location/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Have a good day :)
Your question title isn't well written. It has nothing to do with "changing headers", and nginx location, while essentially tied to request processing, isn't the right term there either. It should be something like "how to serve the assets requested by a proxied page differently from the others".
OK, here we go. First of all, do you really understand what exactly the following config does?
location /location/ {
    proxy_pass https://testing.app/location/;
}
The URI prefix /location/ specified after the https://testing.app upstream name has the following effect:
- The /location/ prefix specified in the location directive is truncated from the processed URI.
- The processed URI is then prepended with the /location/ prefix specified in the proxy_pass directive before being passed to the upstream.
That is, if these prefixes are equal, the following configuration will do the same eliminating those steps (therefore processing the request slightly more efficiently):
location /location/ {
    proxy_pass https://testing.app;
}
Understanding this is key to the trick we're going to use.
All the requests for static assets made from the /location/ page will carry an HTTP Referer header equal to https://production.app/location/ (unless you specify no-referrer as your page's referrer policy; if you do, the whole trick won't work at all unless you change it). So we can rewrite any request URI that matches some condition (say, one starting with the /static/ or /img/ prefix), redirecting it to a special internal location using the following if block (it should be placed at the server configuration level):
if ($http_referer = https://production.app/location/) {
    rewrite ^/static/ /internal$uri;
    rewrite ^/img/ /internal$uri;
    ...
}
We can use any prefix; the /internal one used here is only an example, but the prefix chosen should not interfere with your existing routes. Next, we define that special location using the internal directive:
location /internal/ {
    internal;
    proxy_pass https://testing.app/;
}
The trailing slash after the https://testing.app upstream name here is the essential part. It makes the proxied URI return to its original state, removing the /internal/ prefix added by the rewrite directive earlier and replacing it with a single slash.
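Putting the pieces together, the request flow for an asset looks like this (a sketch using the example names above):

```nginx
# Browser on https://production.app/location/ requests /static/xyz.js
#   1. the if block rewrites   /static/xyz.js  ->  /internal/static/xyz.js
#   2. location /internal/ matches; proxy_pass https://testing.app/ replaces
#      the /internal/ prefix with a single slash
#   3. the upstream receives   GET /static/xyz.js   on testing.app
```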
You can use this trick for more than one page using regex pattern to match the Referer header value, e.g.
if ($http_referer ~ ^https?://production\.app/(page1|page2|page3)/) {
    ...
}
You should not try this for anything but static assets, or it can break the app's routing mechanism. This is also only a workaround that should be used for testing purposes, not something I can recommend for long-term production use.
BTW, are you sure you really need the
proxy_set_header Host $host;
line in your location? Do you really understand what it means, or are you just copy-pasting it from some other configuration? Check this answer so you aren't surprised if something goes wrong.
Is it possible to configure NGINX as something like a multiple reverse proxy? So, instead of one proxy_pass:
location /some/path/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}
could it have multiple proxy_pass directives (relays)? I need something like a load balancer, but one that sends the request not to one of the servers but TO ALL of them (see the scheme below).
TO BE ACCURATE: IT'S NOT REVERSE PROXY AND IT IS NOT LOAD BALANCING AS WELL.
The response may be retrieved from any of them, maybe the last one, or even be "hardcoded" with some configuration directives - it does not matter (it's enough for it to be HTTP 200; it will be ignored)... So, the scheme should look like:
.----> server 1
/
<---> NGINX <-----> server 2 (response from 2nd server, but it maybe any of them!)
\
`----> server 3
Maybe some extension for NGINX? Is it possible at all and how to do it if it is?
Yes, it is possible.
What you are searching for is called mirroring, and nginx has implemented it since version 1.13.4; see the mirror directive for more info.
Example:
location = /mirror1 {
    internal;
    proxy_pass http://mirror1.backend$request_uri;
}

location = /mirror2 {
    internal;
    proxy_pass http://mirror2.backend$request_uri;
}

...

location /some/path/ {
    mirror /mirror1;
    mirror /mirror2;
    proxy_pass http://primary.backend;
}
(You can also specify it for a whole server (or even http) block and disable it for locations where you don't need it.)
Alternatively, you could try post_action (but this is an undocumented feature and, if I remember correctly, is deprecated).
We're using nginx to provide a security layer in front of an AWS Presto cluster. We registered an SSL certificate for nginx-presto-cluster.our-domain.com.
Requests for Presto are passed through nginx with Basic authentication.
An SQL query to Presto results in multiple sequential requests to the server to fetch query results.
We created a nginx.conf that looks like this:
location / {
    auth_basic $auth;
    auth_basic_user_file /etc/nginx/.htpasswd;

    sub_filter_types *;
    sub_filter_once off;
    sub_filter 'http://localhost:8889/' 'https://presto.nginx-presto-cluster.our-domain.com/';

    proxy_pass http://localhost:8889/;
}
Presto's responses contain a nextUri from which to fetch results. The sub_filter rewrites these URIs from localhost:8889 to our secure domain, so that they are passed through nginx again.
The problem:
The first response has a body that looks exactly as desired:
{
    "id": "20171123_104423_00092_u7hmr",
    ...
    "nextUri": "https://presto.nginx-presto-cluster.our-domain.com/v1/statement/20171123_104423_00092_u7hmr/1",
    ...
}
The second request, however, looks like:
{
    "id": "20171123_105250_00097_u7hmr",
    ...
    "nextUri": "http://localhost:8889/v1/statement/20171123_105250_00097_u7hmr/2",
    ...
}
We would've expected the rewrite to work always the same way.
Can you help us?
We resolved the issue by adding
proxy_set_header Accept-Encoding "";
into the config snippet above.
The reason is that the upstream response may be compressed if the client advertises support for compression, and sub_filter's string replacement does not work on compressed content. By not accepting any encoding, we prevent this compression.
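Applied to the earlier snippet, the fix sits alongside the sub_filter directives:

```nginx
location / {
    auth_basic $auth;
    auth_basic_user_file /etc/nginx/.htpasswd;

    # ask the upstream for an uncompressed response so sub_filter can see the text
    proxy_set_header Accept-Encoding "";

    sub_filter_types *;
    sub_filter_once off;
    sub_filter 'http://localhost:8889/' 'https://presto.nginx-presto-cluster.our-domain.com/';

    proxy_pass http://localhost:8889/;
}
```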
I have the following in my config as a reverse proxy for images:
location ~ ^/image/(.+) {
    proxy_pass http://example.com/$1;
}
The problem is that not all images will be example.com images and so we need to pass in the full url. If I try:
location ~ ^/image/(.+) {
    proxy_pass $1;
}
I get an error:
invalid URL prefix in "https:/somethingelse.com/someimage.png"
The question is quite vague but, based on the error message, what you're trying to do is perform a proxy_pass based entirely on user input, using the complete URL specified after the /image/ prefix of the URI.
Basically, this is a very bad idea, as you're opening yourself up to becoming an open proxy. However, the reason it doesn't work with the conf you supplied is URL normalisation, which, in your case, compacts http://example into http:/example (the double slash becomes single), which is different in the context of proxy_pass.
If you don't care about security, you can just change merge_slashes from the default of on to off:
merge_slashes off;
location …
Another possibility, somewhat related to nginx proxy_pass and URL decoding, is:
location ~ ^/image/.+ {
    rewrite ^ $request_uri;
    rewrite ^/image/(.*) $1 break;
    return 400;
    proxy_pass $uri; # will result in an open proxy, don't try at home
}
The proper solution would be to implement a whitelist, possibly with the help of map or even prefix-based location directives:
location ~ ^/image/(http):/(upload.example.org)/(.*) {
    proxy_pass $1://$2/$3;
}
Do note that, as per the explanation at the beginning, the location above is subject to the merge_slashes setting, so it'll never see the double // by default, hence the need to add the double // back manually at the proxy_pass stage.
I would use a map in this case
map $request_uri $proxied_url {
    # if you don't care about domain and file extension
    ~*/image/(https?)://?(.*) $1://$2;

    # if you want to limit file extension
    ~*/image/(https?)://?(.*\.(png|jpg|jpeg|ico))$ $1://$2;

    # if you want to limit file extension and domain
    ~*/image/(https?)://?(abc\.xyz\.com/)(.*\.(png|jpg|jpeg|ico))$ $1://$2$3;

    default "/404";
}
Then in your proxy_pass part you would use something like the below:
location /image/ {
    proxy_pass $proxied_url;
}
I have given three different examples depending on how you want to handle it.
I have an nginx location directive whose purpose is to "remove" the localization prefix from the URI for the proxy_pass directive.
For example, to make the URI http://example.com/en/lalala use proxy_pass http://example.com/lalala:
location ~ '^/(?<locale>[\w]{2})(/(?<rest>.*))?$' {
    ...
    proxy_pass http://example/$rest;
    ...
}
This way, the rest variable is decoded when passed to the proxy_pass directive. That seems to be expected behavior.
The problem is when my URI contains encoded space %20 passed from client
http://example.com/lala%20lala
nginx decodes URI to
http://example.com/lala lala
I can see it in my error.log.
The question is: is it possible to use the rest variable in encoded form, as it is passed from the client?
If I am doing something completely wrong, please, suggest the right way.
Thank you.
Yes, this behaviour is expected, although the docs also say:
If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI:
location /some/path/ {
    proxy_pass http://127.0.0.1;
}
Nginx engineers say the same: https://serverfault.com/questions/459369/disabling-url-decoding-in-nginx-proxy
However, if you append $request_uri to proxy_pass (stripping the locale from it beforehand), it may work, as the Nginx engineer says:
set $modified_uri $request_uri;
if ($modified_uri ~ "^/([\w]{2})(/.*)") {
    set $modified_uri $2;
}
proxy_pass http://example$modified_uri;
I have had some success using the following with Confluence and other Atlassian applications behind nginx where special characters such as ( ) < > [ ] were causing issues.
location /path {
    # [... other proxy options ...]

    # set proxy path with regex
    if ($request_uri ~* "/path(/.*)") {
        proxy_pass http://server:port/path$1;
        break;
    }

    # fallback (probably not needed)
    proxy_pass http://server:port/path;
}
The set directive can do the trick. It keeps the encoding intact, or rather re-encodes the decoded string.
location ~ '^/(?<locale>[\w]{2})(/(?<rest>.*))?$' {
    ...
    set $encoded_rest $rest;
    proxy_pass http://example/$encoded_rest;
    ...
}