Nginx Content-Disposition - trim file name

I will briefly describe my problem: I need to trim the first 14 characters (digits 0-9 and "-") from the file name when a file is downloaded. It is only about the Content-Disposition header. How can I achieve something like that?
I want a file stored like this:
1235467890123-FileName.txt
to be downloaded as:
FileName.txt
Config file:
upstream oxide_io {
ip_hash;
server 127.0.0.1:xxx;
server 127.0.0.1:xxx;
server 127.0.0.1:xxx;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name oxidepolska.pl;
error_log /var/log/nginx/forum-error.log error;
ssl ***
proxy_hide_header X-Powered-By;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
gzip ***
location @oxide {
proxy_pass http://oxide_io;
}
location ~ ^/assets/(.*) {
root /var/www/forum/;
try_files /build/public/$1 /public/$1 @oxide;
}
location /plugins/ {
root /var/www/forum/build/public/;
try_files $uri @oxide;
}
location ~ ^/public/uploads/files/(.*)$ {
root /var/www/forum/public/uploads/files/;
add_header Content-Disposition 'inline; filename="$1"';
}
location / {
proxy_pass http://oxide_io;
}
error_page 502 503 /503.html;
location = /503.html {
root /var/www/forum/public/;
}
}
server {
if ($host = oxidepolska.pl) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
listen [::]:80;
server_name oxidepolska.pl;
}

As I finally understand it, you are trying to get rid of the timestamp prefix added to files uploaded on some forum? Here is another config, manually adding the Content-Disposition header:
map $uri $content_disposition {
'~/\d{13}-([^/]+)$' 'attachment; filename="$1"';
}
server {
...
location <uploaded_files_location> {
add_header Content-Disposition $content_disposition;
}
...
}
When the requested file does not match the pattern \d{13}- (13 digits and a "-" sign before the rest of the filename), the $content_disposition variable evaluates to an empty string, so the Content-Disposition header is not added to the response.

➜ ~ curl -I https://ddl.oxidepolska.pl/1544820059040-lootconfig.zip
HTTP/2 200
server: nginx
date: Fri, 14 Dec 2018 21:54:56 GMT
content-type: application/zip
content-length: 7138
last-modified: Fri, 14 Dec 2018 20:40:59 GMT
etag: "5c14155b-1be2"
accept-ranges: bytes
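
Putting the pieces together, a minimal sketch of how this map could be wired into the uploads location from the question's config (the on-disk path comes from the question and is an assumption about where the files actually live; alias is used instead of root because a regex location with root would append the full request URI to the path again):
map $uri $content_disposition {
    '~/\d{13}-([^/]+)$' 'attachment; filename="$1"';
}
server {
    ...
    location ~ ^/public/uploads/files/(.*)$ {
        # map the captured part of the URI onto the directory from the question
        alias /var/www/forum/public/uploads/files/$1;
        add_header Content-Disposition $content_disposition;
    }
    ...
}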

Disclaimer: this answer is about modifying an existing Content-Disposition header, which is not what the OP required.
map $upstream_http_content_disposition $content_disposition {
default $upstream_http_content_disposition;
'~^(.*filename="?)[-\d]{14}(.+?)("?$)' $1$2$3;
}
server {
...
location @oxide {
proxy_pass http://oxide_io;
proxy_hide_header Content-Disposition;
add_header Content-Disposition $content_disposition;
}
...
location / {
proxy_pass http://oxide_io;
proxy_hide_header Content-Disposition;
add_header Content-Disposition $content_disposition;
}
...
}
(assuming the first 14 characters can only be 0-9 and '-')
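
One extra detail, not mentioned in the answer above: add_header only applies to a limited set of success and redirect status codes by default. If the rewritten header should be emitted on every response, the always parameter can be appended; a minimal sketch based on the location above:
location / {
    proxy_pass http://oxide_io;
    proxy_hide_header Content-Disposition;
    # "always" makes nginx add the header regardless of the response status code
    add_header Content-Disposition $content_disposition always;
}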

Related

Nginx empty Accept-Encoding leads to timeout

When I want to replace a string in an Nginx reverse proxy I use
sub_filter 'https://upstream.com' 'http://www.localhost:8080';
sub_filter_once off;
But it doesn't work (because the response is gzipped), so I add
proxy_set_header Accept-Encoding "";
But then the request times out (gateway timeout). I wonder how this is possible; I tried a GET to upstream.com with Postman with the Accept-Encoding header set to empty and it worked fine, so it must be an issue on my side.
My config looks like:
worker_processes 1;
events {
worker_connections 1024;
}
http {
server {
listen 8080;
server_name localhost;
location / {
proxy_pass https://www.upstream.com;
proxy_hide_header 'x-frame-options';
proxy_cookie_domain ~^(.*)$ "http://www.localhost:8080";
proxy_set_header X-Real-IP $remote_addr;
proxy_cookie_path / "/; secure; HttpOnly; SameSite=none";
sub_filter 'https://www.upstream.com' 'http://www.localhost:8080';
sub_filter_once off;
sub_filter_types text/html;
proxy_set_header Accept-Encoding "";
}
}
}
PS: upstream.com is just an example, I use another URL.
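One frequent cause of a gateway timeout when nginx proxies to an HTTPS upstream by name is SNI: nginx does not pass the server name during the TLS handshake unless told to. A hedged sketch of the directives usually suggested for that case (www.upstream.com is the question's placeholder, and whether SNI is the actual cause here is only an assumption):
location / {
    proxy_pass https://www.upstream.com;
    # pass the upstream host name via SNI during the TLS handshake
    proxy_ssl_server_name on;
    # many HTTPS upstreams also expect a matching Host header
    proxy_set_header Host www.upstream.com;
    proxy_set_header Accept-Encoding "";
    sub_filter 'https://www.upstream.com' 'http://www.localhost:8080';
    sub_filter_once off;
}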

How to add Cache-Control header in Nginx for a specific folder and every file and folder beneath it?

I want to disable all caching for a specific folder and all its descendants in my application. The folder has some files in the root and also other subdirectories.
[screenshot of the folder in Windows Explorer]
This is what I have tried so far in my nginx configuration file.
I added two map directives and used them in my server directive.
If I check the response headers for any file with curl like so
curl -I http://example.com/api/folder/index.html
I don't see the added Cache-Control header.
This is my nginx file:
events {
}
http {
include /etc/nginx/mime.types;
map $uri $cache_control {
/api/folder/(l|m|r|s)/ "no-cache, no-store, must-revalidate";
}
map $uri $expire {
/api/folder/(l|m|r|s)/ off;
}
upstream api {
server api:80;
}
upstream web {
server web:80;
}
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
expires $expire;
add_header Cache-Control $cache_control;
location /api {
rewrite /api/(.*) /$1 break;
proxy_pass http://api;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
location /swagger {
proxy_pass http://api;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
location / {
proxy_pass http://web;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
}
}
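
A detail worth knowing about map here: a plain key is compared to the source value as a literal string, and a key that is meant to be a regular expression must be prefixed with ~ (or ~* for case-insensitive matching). A sketch of the two maps rewritten as regex matches, assuming the keys in the question were intended as patterns:
map $uri $cache_control {
    # "~" marks the key as a regular expression; without it the key is matched literally
    '~^/api/folder/(l|m|r|s)/' "no-cache, no-store, must-revalidate";
}
map $uri $expire {
    '~^/api/folder/(l|m|r|s)/' off;
}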

Nginx try root and then proxy backend

I am trying to get nginx to look in the root folder first, and then, if the file is not found, to fall back to the (Node.js) proxy backend, but it fails for static files that are not present in the root.
I assume that the try_files $uri @backend; entry will first attempt to serve the file from root, and if that fails, try the backend?
If I connect directly to port 3030 then it does serve the files, so the problem seems to be in the nginx config?
server {
listen 80;
listen [::]:80;
listen 443 ssl http2;
listen [::]:443 http2;
server_name somesite.com;
root /usr/share/nodejs/public;
location / {
try_files $uri @backend;
}
location ~ .(?:ico|jpg|css|png|js|swf|woff|eot|svg|ttf|html|gif)$ {
add_header Pragma "public";
add_header Cache-Control "public";
expires 30d;
}
location @backend {
proxy_pass http://localhost:3030;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_cache_valid 502 5s;
}
location = /444-response {return 444;}
}
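
One thing stands out in this config: the regex location for static extensions takes precedence over location / and has no try_files of its own, so requests for missing static files never reach the backend. A sketch of the commonly suggested adjustment (an assumption, not taken from the question):
location ~ \.(?:ico|jpg|css|png|js|swf|woff|eot|svg|ttf|html|gif)$ {
    add_header Pragma "public";
    add_header Cache-Control "public";
    expires 30d;
    # fall back to the proxied backend when the file is not present under root
    try_files $uri @backend;
}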

Reverse proxy to port 8069 on Engintron issues while it works on standard NGINX setup

I have an Odoo app running on port 8069. This setup worked fine on my old server, but my new server uses Engintron, which seems to have a different way of handling vhosts. The standout issue is that the location / block in common_http.conf becomes a duplicate of the one needed to run the app, and it is part of the automatically generated config that gets overwritten whenever a cPanel account is created or deleted, or when Engintron is updated.
What would be the correct way of setting this up within Engintron?
common_http.conf
location / {
try_files $uri $uri/ @backend;
}
# This location / ends up getting included in the custom
# vhost which is needed for all of the sites except this Odoo app.
custom_vhost.com.conf
upstream example {
server 127.0.0.1:8069 weight=1 fail_timeout=0;
}
upstream example-chat {
server 127.0.0.1:8072 weight=1 fail_timeout=0;
}
server {
listen [::]:80;
server_name delegates.example.com;
return 301 https://delegates.example.com$request_uri;
}
server {
listen [::]:80;
server_name vendors.example.com;
return 301 https://vendors.example.com$request_uri;
}
server {
listen [::]:80;
server_name example.com;
return 301 https://example.com;
}
server {
listen [::]:80;
server_name *.example.com;
return 301 https://example.com;
}
server {
listen [::]:443 ssl;
server_name pgadmin.example.com;
# well-known_start
location ^~ /.well-known {
add_header Host-Header 192fc2e7e50945beb8231a492d6a8024;
root /home/example/public_html;
}
# well-known_end
ssl_certificate /var/cpanel/ssl/apache_tls/*.example.com/combined;
ssl_certificate_key /var/cpanel/ssl/apache_tls/*.example.com/combined;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
add_header X-Content-Type-Options nosniff;
add_header Cache-Control public;
location / {
deny all;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://127.0.0.1:5050;
}
}
server {
listen [::]:443 ssl;
server_name example.com www.example.com;
return 301 https://example.com;
}
server {
listen [::]:443 ssl http2;
server_name vendors.example.com delegates.example.com;
client_max_body_size 200m;
proxy_read_timeout 720s;
proxy_connect_timeout 720s;
proxy_send_timeout 720s;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-NginX-Proxy true;
#proxy_set_header X-Odoo-dbfilter ^%d\Z;
proxy_redirect off;
proxy_buffering off;
# well-known_start
location ^~ /.well-known {
add_header Host-Header 192fc2e7e50945beb8231a492d6a8024;
root /home/example/public_html;
}
# well-known_end
ssl_certificate /var/cpanel/ssl/apache_tls/*.example.com/combined;
ssl_certificate_key /var/cpanel/ssl/apache_tls/*.example.com/combined;
access_log /var/log/nginx/odoo.access.log;
error_log /var/log/nginx/odoo.error.log;
# adds gzip options
gzip on;
gzip_types text/css text/plain text/xml application/xml application/javascript application/x-javascript text/javascript application/json text/x-json;
gzip_proxied no-store no-cache private expired auth;
#gzip_min_length 1000;
gzip_disable "MSIE [1-6]\.";
location /longpolling {
proxy_pass http://example-chat;
}
location ~* /web/static/ {
gzip_static on;
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
add_header Cache-Control public;
proxy_pass http://example;
}
location / {
error_page 403 = https://example.com;
proxy_pass http://example;
proxy_redirect off;
gzip_static on;
}
# The above location becomes a duplicate of the previous default location, which in turn makes the configuration invalid.
location ~* /web/content/ {
gzip_static on;
proxy_cache_valid 200 90m;
proxy_buffering on;
expires 864000;
add_header Cache-Control public;
proxy_pass http://example;
}
location /web/database/manager {
deny all;
error_page 403 https://example.com;
proxy_pass http://example;
}
}
Since the conf files are included in alphabetical order, and any conflicting or duplicate settings are ignored, I ended up renaming the file so that it is included before the other ones. I also made the file immutable with the following command:
chattr +ai 1_custom_vhost.com.conf
I'm quite sure this is not a graceful solution, but it does the job for now.

nginx proxy and backbone pushstate

I'm trying to set up nginx to work with my Backbone.js application and API server.
The API server is external and is routed through https://website.com/api/...
Essentially, I want any unmatched URLs to be routed to /index.html for the Backbone app to handle.
I've tried using try_files, but that just overrides my API. I've tried setting up another location where I check whether the request is a GET and whether it doesn't match register, login, or api, but that also doesn't work. Here's my server so far:
server {
listen 80; ssl off;
listen 443 ssl;
server_name app.io;
ssl_certificate /etc/nginx/conf/ssl.crt;
ssl_certificate_key /etc/nginx/conf/app.key;
root /home/ubuntu/app/public;
access_log /var/log/nginx/app.access.log;
error_log /var/log/nginx/app.error.log;
index index.html;
location / {
if ($scheme = "http") {
rewrite ^ https://$http_host$request_uri? permanent;
}
}
location ~ ^/(api)|(auth).*$ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://app.aws.af.cm;
}
location ~ ^(/(register)|(login)).*$ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# GETs only
limit_except POST {
proxy_pass https://app.aws.af.cm;
}
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires max;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
}
Currently, try_files overrides the API and just redirects to index.html. Any idea how I can get everything to play nicely with one another?
Here's what I want:
if / - /index.html
else if /api/*|/auth/* - external proxy
else if /login|/register - POST - external proxy
else /* - /#$1
Figured it out:
Add try_files $uri @rewrites; to location / and also add the @rewrites location below.
server {
listen 80; ssl off;
listen 443 ssl;
server_name app.io;
ssl_certificate /opt/nginx/conf/ssl.crt;
ssl_certificate_key /opt/nginx/conf/app.key;
root /home/ubuntu/app/public;
access_log /var/log/nginx/app.access.log;
error_log /var/log/nginx/app.error.log;
index index.html;
location / {
if ($scheme = "http") {
rewrite ^ https://$http_host$request_uri? permanent;
}
try_files $uri @rewrites;
}
location ~ ^/(api)|(auth)|(logout)|(register)|(login).*$ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://app.cm;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires max;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
location @rewrites {
rewrite ^/.+ /#$uri redirect;
}
}
