nginx file upload with client_body_in_file_only - nginx

Good evening,
I need to upload static content to an nginx 1.9 server (so the upload module doesn't work with this version). I've read the article "Nginx direct file upload without passing them through backend" and followed the instructions step by step. Everything works for me except the file names in the nginx data directory: they look like '0000000001', '0061565403' and so on. What should I do to save the files under their original names?
Here is my nginx location config:
location /upload {
    limit_except POST { deny all; }
    client_body_temp_path /data/;
    client_body_in_file_only on;
    client_body_buffer_size 128K;
    client_max_body_size 50M;
    proxy_pass_request_headers on;
    proxy_set_header content-type "text/html";
    proxy_set_body $request_body_file;
    proxy_pass http://localhost:8080/;
    proxy_redirect off;
}

You can use an HTTP header in the client to pass the correct name (whatever that is), e.g.:
Correct-Filename: my-correct-filename
And since you're using proxy_pass_request_headers on, the header is visible to the backend, where you can use it when saving the file. Note, however, that when using headers the filename is limited to ASCII characters; see this answer.
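For example, a minimal sketch along the lines of the question's own config (the X-File-Name header name is just an illustration, not something from the question): nginx exposes the client's Correct-Filename header as $http_correct_filename, so you can also forward it to the backend explicitly:

location /upload {
    limit_except POST { deny all; }
    client_body_temp_path /data/;
    client_body_in_file_only on;
    proxy_pass_request_headers on;
    # nginx maps the "Correct-Filename" request header to $http_correct_filename
    proxy_set_header X-File-Name $http_correct_filename;
    proxy_set_body $request_body_file;
    proxy_pass http://localhost:8080/;
}

The backend then sees both the temp file path (request body) and the desired name (header) and can rename the file accordingly.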

The only way I have been able to do this is to send the original filename as a parameter (I use JS to copy the filename to a hidden field), and then, on the server, if I am storing the temp file to our file system, I use that parameter to rename the file in the process of saving it to its "proper" location.
Not beautiful, but it works.

Related

Nginx reverse proxy prevent changing host for other locations

I've been trying to get this done for a couple of days and I can't. I have two web apps, one production and one testing, let's say https://production.app and https://testing.app. I need to set up a reverse proxy under https://production.app/location that points to https://testing.app/location (at the moment I only need one piece of functionality from the testing environment in production). The configuration I created does proxy this exact location, but that functionality also loads resources from the /static directory, resulting in requests to https://production.app/static/xyz.js instead of https://testing.app/static/xyz.js, and /static can't simply be proxied. Is there a way to change headers only in this proxied traffic so that it's https://testing.app/static (and of course any other locations)?
Below is my current config (only directives regarding proxy):
server {
    listen 443 ssl;
    server_name production.app;
    root /var/www/production.app;

    location /location/ {
        proxy_pass https://testing.app/location/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Have a good day :)
Your question title isn't well written. It has nothing to do with "changing headers", and nginx location, while essentially tied to request processing, isn't the right term there either. It should be something like "how to serve the assets requested by a proxied page differently from the others".
Ok, here we go. First of all, do you really understand what exactly the following config does?
location /location/ {
    proxy_pass https://testing.app/location/;
}
The URI prefix /location/ specified after the https://testing.app upstream name has the following effect:
The /location/ prefix specified in the location directive is truncated from the processed URI.
The processed URI is then prepended with the /location/ prefix specified in the proxy_pass directive before being passed to the upstream.
That is, if these prefixes are equal, the following configuration will do the same while eliminating those steps (and therefore processing the request slightly more efficiently):
location /location/ {
    proxy_pass https://testing.app;
}
Understanding this is key to the trick we're going to use.
Since all the requests for static assets made from the /location/ page will have the HTTP Referer header equal to https://production.app/location/ (well, unless you specify no-referrer as your page's referrer policy, and if you do, the whole trick won't work at all unless you change it), we can rewrite a request URI that matches certain conditions (say, one starting with the /static/ or /img/ prefix), redirecting it to a special internal location using the following if block (which should be placed at the server configuration level):
if ($http_referer = https://production.app/location/) {
    rewrite ^/static/ /internal$uri;
    rewrite ^/img/ /internal$uri;
    ...
}
We can use any prefix; the /internal one used here is only an example. However, the prefix used should not interfere with your existing routes. Next, we will define that special location using the internal keyword:
location /internal/ {
    internal;
    proxy_pass https://testing.app/;
}
The trailing slash after the https://testing.app upstream name here is the essential part. It makes the proxied URI return to its original state, removing the /internal/ prefix added by the rewrite directive earlier and replacing it with a single slash.
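Putting the pieces together, a sketch of the relevant part of the production.app server block might look like this (this only assembles the fragments above; adjust the prefixes to your actual asset paths):

server {
    listen 443 ssl;
    server_name production.app;

    # Asset requests made from the proxied page carry its Referer,
    # so redirect them to the internal location defined below
    if ($http_referer = https://production.app/location/) {
        rewrite ^/static/ /internal$uri;
        rewrite ^/img/ /internal$uri;
    }

    location /location/ {
        proxy_pass https://testing.app;
    }

    location /internal/ {
        internal;
        # the trailing slash strips the /internal/ prefix again
        proxy_pass https://testing.app/;
    }
}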
You can use this trick for more than one page by using a regex pattern to match the Referer header value, e.g.
if ($http_referer ~ ^https?://production\.app/(page1|page2|page3)/) {
    ...
}
You should not try this for anything but static assets, or it can break the app's routing mechanism. This is also only a workaround intended for testing purposes, not something I can recommend for production use on a long-term basis.
BTW, are you sure you really need that
proxy_set_header Host $host;
line in your location? Do you really understand what it means, or are you just copy-pasting it from some other configuration? Check this answer to avoid being surprised if something goes wrong.
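For reference, a rough sketch of the difference (a generic illustration, not taken from the question's setup; how much it matters depends on how testing.app is hosted):

# With this line the upstream receives "Host: production.app", which may make
# testing.app's name-based virtual host not match and serve the wrong site:
proxy_set_header Host $host;

# nginx's default for proxied requests is effectively the following, i.e. it
# sends "Host: testing.app" (the name taken from proxy_pass), which is often
# what you want when proxying to a different site:
proxy_set_header Host $proxy_host;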

Nginx fancy-index header and footer never load

I'm setting up an nginx file server, and I'm trying to enable the fancyindex module to get a custom header and footer, but I can't get it working: the header/footer never load (the request isn't even made by the browser).
So far, I've followed this tutorial: https://neilmenon.com/blog/install-nginx-fancyindex
My current config for the site is
server {
    listen 80;
    server_name myname;

    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;

    location / {
        root /var/www/html;
        fancyindex on;
        fancyindex_exact_size off;
        fancyindex_footer /fancy-index/footer.html;
        fancyindex_header /fancy-index/header.html;
        fancyindex_css_href /fancy-index/style.css;
        fancyindex_time_format "%B %e, %Y";
    }
}
I've also loaded the module in nginx.conf, on the first line of the file:
load_module /usr/share/nginx/modules/ngx_http_fancyindex_module.so;
I should also mention that I am new to nginx, so I apologize if this is a common issue that I should be aware of.
Thanks in advance, any help would be much appreciated
I wasn't able to find a solution using fancyindex; however, I got a workaround using the ngx_http_addition_module module, on which fancyindex is based.
The module is documented here: https://nginx.org/en/docs/http/ngx_http_addition_module.html
Basically, the configuration goes as follows:
location / {
    root /var/www/html;
    addition_types text/html;                   # Replace this with whatever MIME type this server responds with
    add_before_body /fancy-index/header.html;   # Replaces fancyindex_header
    add_after_body /fancy-index/footer.html;    # Replaces fancyindex_footer
}
These directives don't give you a way to link a stylesheet or change the time format, but nothing prevents you from loading a stylesheet from the header and adding a script to it to handle the time.
I had the same problem. The issue comes from the fact that you enabled autoindex.
To fix the issue you need to comment out the lines that reference autoindex.
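Applied to the configuration from the question, that would be something like the following sketch (per the answer above, the built-in autoindex is what keeps fancyindex from taking over):

server {
    listen 80;
    server_name myname;

    # autoindex on;
    # autoindex_exact_size off;
    # autoindex_localtime on;

    location / {
        root /var/www/html;
        fancyindex on;
        fancyindex_exact_size off;
        fancyindex_footer /fancy-index/footer.html;
        fancyindex_header /fancy-index/header.html;
        fancyindex_css_href /fancy-index/style.css;
        fancyindex_time_format "%B %e, %Y";
    }
}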

nginx hook before upload

I've created an intranet HTTP site where users can upload their files. I have created a location like this one:
location /upload/ {
    limit_except POST { deny all; }
    client_body_temp_path /home/nginx/tmp;
    client_body_in_file_only on;
    client_body_buffer_size 1M;
    client_max_body_size 10G;
    proxy_set_header X-upload /upload/;
    proxy_set_header X-File-Name $request_body_file;
    proxy_set_body $request_body_file;
    proxy_redirect off;
    proxy_pass_request_headers on;
    proxy_pass http://localhost:8080/;
}
Quite easy, as suggested in the official docs. When the upload is complete, the proxy_pass directive calls the custom URI, and the backend performs filesystem operations on the newly created temp file.
curl --request POST --data-binary "@myfile.img" http://myhost/upload/
Here's my problem: I need some kind of custom hook/operation telling me when the upload begins, something nginx can call before starting to consume the HTTP stream. Is there a way to achieve that? I mean, before uploading big files I need to call a custom URL (something like proxy_pass) to inform the server about the upload and execute certain operations.
I have tried the echo-nginx module but didn't succeed with these HTTP POSTs (binary, form-urlencoded). I don't want to use external scripts to deal with the upload; I'd rather keep this kind of operation inside nginx (more performant).
Thanks in advance.
Ben
Replying to myself: I have found the following directive, which solves my own request:
auth_request <something>
So I can do something like:
location /upload/ {
    ...
    # Pre auth
    auth_request /somethingElse/;
    ...
}

# Newly added section
location /somethingElse/ {
    ...
    proxy_pass ...;
}
This seems to be fine and working, and it's useful for uploads as well as for general auth or basic prechecks.
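A slightly more complete sketch of that idea, filled in with the names from the question (the /upload-hook/ path and the backend's /hook/ endpoint are assumptions, not part of the original setup; this requires nginx built with ngx_http_auth_request_module):

location /upload/ {
    limit_except POST { deny all; }
    client_body_temp_path /home/nginx/tmp;
    client_body_in_file_only on;
    client_body_buffer_size 1M;
    client_max_body_size 10G;

    # Pre-upload hook: nginx makes a subrequest here first and only continues
    # if the hook location answers with a 2xx status
    auth_request /upload-hook/;

    proxy_set_header X-File-Name $request_body_file;
    proxy_set_body $request_body_file;
    proxy_pass http://localhost:8080/;
}

location /upload-hook/ {
    internal;
    # The subrequest carries the original headers but no body
    proxy_pass http://localhost:8080/hook/;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}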

Encode and decode pathname in nginx

Normally files can be accessed at:
http://example.com/cats/cat1.zip
I want to encode/encrypt the pathname (/cats/cat1.zip) so that the file is not accessible at its normal link, only when the pathname is encrypted/encoded:
http://example.com/Y2F0cy9jYXQxLnppcAo=
I'm using base64 encoding above for simplicity but would prefer encryption. How do I go about doing this? Do I have to write a custom module?
You can use an nginx rewrite rule to rewrite the URL (from encoded to unencoded). And, to apply your decoding logic, you can use a custom function (I did it with the Perl module).
It could be something like this:
http {
    ...
    perl_modules perl/lib;
    ...
    perl_set $uri_decode 'sub {
        my $r = shift;
        my $uri = $r->uri;
        $uri = perl_magic_to_decode_the_url;
        return $uri;
    }';
    ...
    server {
        ...
        location /your-protected-urls-regex {
            rewrite ^(.*)$ $scheme://$host$uri_decode;
        }
    }
}
If your only concern is limiting access to certain URLs, you may take a look at this post on Securing URLs with the Secure Link Module in Nginx.
It provides a fairly simple method for securing your files. The most basic and simple way to protect your URLs is by using the secure_link_secret directive:
server {
    listen 80;
    server_name example.com;

    location /cats {
        secure_link_secret yoursecretkey;
        if ($secure_link = "") { return 403; }
        rewrite ^ /secure/$secure_link;
    }

    location /secure {
        internal;
        root /path/to/secret/files;
    }
}
The URL to access cat1.zip file will be http://example.com/cats/80e2dfecb5f54513ad4e2e6217d36fd4/cat1.zip where 80e2dfecb5f54513ad4e2e6217d36fd4 is the MD5 hash computed on a text string that concatenates two elements:
The part of the URL that follows the hash, in our case cat1.zip
The parameter to the secure_link_secret directive, in this case yoursecretkey
The above example also assumes the files accessible via the protected URLs are stored in the /path/to/secret/files/secure directory.
Additionally, there is a more flexible, but also more complex, method for securing URLs with the ngx_http_secure_link_module module, using the secure_link and secure_link_md5 directives to limit URL access by IP address, define an expiration time for the URLs, etc.
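For completeness, a sketch of that variant following the pattern from the nginx documentation (the secret, argument names and paths are only examples): the link carries a base64url-encoded MD5 hash in the md5 argument and a Unix timestamp in the expires argument, and the hash also binds the client IP:

location /cats {
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri$remote_addr yoursecretkey";

    # no hash, or a hash that doesn't match
    if ($secure_link = "") { return 403; }
    # hash matches, but the expiration time has passed
    if ($secure_link = "0") { return 410; }

    root /path/to/secret/files;
}

With this variant a link such as /cats/cat1.zip?md5=...&expires=... serves /path/to/secret/files/cats/cat1.zip only while the timestamp is in the future.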
If you need to completely obscure your URLs (including the cat1.zip part), you'll need to make a decision between:
Handling the decryption of the encrypted URL on the Nginx side — writing your own, or reusing a module written by someone else
Handling the decryption of the encrypted URL somewhere in your application — basically using Nginx to proxy your encrypted URLs to your application, where you decrypt them and act accordingly, as cnst describes in another answer here.
Both approaches have pros and cons, but IMO the latter one is simpler and more flexible — once you set up your proxy you don't need to worry much about Nginx, nor need to compile it with some special prerequisites; no need to write or compile code in language other than what you already are writing in your application (unless your application includes code in C, Lua or Perl).
Here's an example of a simple Nginx/Express application where you'd handle the decryption within your application. The Nginx configuration might look like:
server {
    listen 80;
    server_name example.com;

    location /cats {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:8000;
    }

    location /path/to/secured/files {
        internal;
    }
}
and on the application (Node.js/Express) side you may have something like:
const express = require('express');
const app = express();

app.get('/cats/:encrypted', function(req, res) {
    const encrypted = req.params.encrypted;
    //
    // Your decryption logic here
    //
    const decryptedFileName = decryptionFunction(encrypted);
    if (decryptedFileName) {
        // Setting X-Accel-Redirect and ending the response makes nginx
        // perform the internal redirect and serve the file itself
        res.set('X-Accel-Redirect', `/path/to/secured/files/${decryptedFileName}`);
        res.end();
    } else {
        // return error
    }
});

app.listen(8000);
The above example assumes that the secured files are located at /path/to/secured/files directory. Also it assumes that if the URL is accessible (properly encrypted) you are sending the files for download, but the same logic would apply if you need to do something else.
The easiest way would be to write a simple backend (interfaced through proxy_pass, for example) that decrypts the filename from $uri and provides the result in the X-Accel-Redirect response header (which is subject to proxy_ignore_headers in nginx). That header then triggers an internal redirect within nginx to a location that cannot be accessed without going through the backend first, and the file is served with all the optimisations that are already part of nginx.
location /sec/ {
    proxy_pass http://decryptor/;
}

location /x-accel-redirect-here/ {
    internal;
    alias …;
}
The above approach follows the ‘microservices’ architecture, in that your decryptor service's only job is to perform decryption and access control, leaving it up to nginx to ensure the files are served correctly and in the most efficient way possible through the internally handled X-Accel-Redirect HTTP response header.
Consider using something like OpenResty with Lua.
Lua can do almost everything you want in nginx.
https://openresty.org/
https://github.com/openresty/
==== UPDATE ====
Now we also have njs (https://nginx.org/en/docs/njs/), JavaScript for nginx, which can also do almost everything.
If anyone bumps into the problem of unescaping query arguments in Nginx, this is how I solved it on a standard Nginx setup on Ubuntu 20.04 (the query argument in the URL is foo, as in https://some.domain.com/some/path?foo=some%2Fquery%2Fvalue):
perl_modules perl/lib;

perl_set $arg_foo_decoded 'sub {
    my $r = shift;
    my $arg_foo = $r->variable("arg_foo");
    $arg_foo =~ s/\+/ /ig;
    my $arg_foo_decoded = $r->unescape($arg_foo);
    return $arg_foo_decoded;
}';
I could then use the $arg_foo_decoded variable in my location blocks. It now contains some/query/value.
Edit
Added a line to ensure compatibility with form values containing spaces encoded as + (see also: https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.1).
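As a hypothetical usage sketch of the "use it in my location blocks" part, the decoded value behaves like any other nginx variable (the location path here is just an example):

location /some/path {
    # For https://some.domain.com/some/path?foo=some%2Fquery%2Fvalue
    # $arg_foo_decoded contains "some/query/value"
    return 200 $arg_foo_decoded;
}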

best way to save nginx request as a file?

I am looking for a solution to save data sent via HTTP (e.g. as a POST) as quickly as possible (with the lowest overhead) via nginx (v1.2.9). I tried the following nginx configuration, but I am not seeing any files written in the directory:
server {
    listen 9199;

    location /saveme {
        client_body_in_file_only on;
        client_body_temp_path /tmp/bodies;
    }
}
What am I doing wrong? And/or is there a better way to accomplish this? (The data that is written should ideally be one file per request, and it does not matter if it is fairly "raw" in nature. Post-processing of the files will be done by a separate process via a queue.)
This question has already been answered here:
Basically, you need to combine log_format and fastcgi_pass. You can then use the access_log directive to specify where the captured request body should be dumped (note that log_format itself has to be declared at the http level):
http {
    log_format postdata $request_body;

    server {
        location = /saveme {
            access_log /var/log/nginx/postdata.log postdata;
            fastcgi_pass php_cgi;
        }
    }
}
It could also work with your method, but I think you're missing client_body_buffer_size and client_max_body_size.
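Building on that, a sketch of the asker's location made to actually write files (the backend address is an assumption): one common explanation for the empty directory is that nginx only reads, and therefore saves, the request body when a content handler such as proxy_pass asks for it, so without one nothing lands in client_body_temp_path.

server {
    listen 9199;

    location /saveme {
        client_body_temp_path /tmp/bodies;
        client_body_in_file_only on;
        client_body_buffer_size 128K;
        client_max_body_size 50M;

        # Handing the request to any backend forces the body to be read and
        # kept; the saved file's path is exposed as $request_body_file
        proxy_set_header X-File-Name $request_body_file;
        proxy_set_body $request_body_file;
        proxy_pass http://localhost:8080/;
    }
}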
Do you mean caching HTTP requests so that when someone accesses and requests a file it is stored on disk rather than in memory?
I would suggest using proxy_cache_path and proxy_cache. The proxy_cache_path directive sets the path and configuration of the cache, and the proxy_cache directive activates it.
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

server {
    ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}
The local disk directory for the cache is called /path/to/cache
levels sets up a two‑level directory hierarchy under /path/to/cache/
keys_zone sets up a shared memory zone for storing the cache keys and metadata such as usage timers
max_size sets the upper limit of the size of the cache
inactive specifies how long an item can remain in the cache without being accessed
the proxy_cache directive activates caching of all content that matches the URL of the parent location block (in the example, /). You can also include the proxy_cache directive in a server block; it applies to all location blocks for the server that don’t have their own proxy_cache directive.
