Nginx empty Accept-Encoding leads to timeout - nginx

When I want to replace a string in an Nginx reverse proxy I use
sub_filter 'https://upstream.com' 'http://www.localhost:8080';
sub_filter_once off;
But it doesn't work (because the response is gzipped), so I add
proxy_set_header Accept-Encoding "";
But then the request times out (gateway timeout). I wonder how this is possible: I did a GET to upstream.com with Postman with the Accept-Encoding header set to empty and it worked fine, so the issue must be on my side.
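For reference, roughly the same check can be done from the command line (a sketch; www.upstream.com stands in for the real host, as in the config below):
curl -v -H 'Accept-Encoding:' https://www.upstream.com/
(a ':' with no value tells curl to drop the header entirely, which mirrors what proxy_set_header Accept-Encoding "" does on the proxied request)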
My config looks like:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name localhost;

        location / {
            proxy_pass https://www.upstream.com;
            proxy_hide_header 'x-frame-options';
            proxy_cookie_domain ~^(.*)$ "http://www.localhost:8080";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_cookie_path / "/; secure; HttpOnly; SameSite=none";
            sub_filter 'https://www.upstream.com' 'http://www.localhost:8080';
            sub_filter_once off;
            sub_filter_types text/html;
            proxy_set_header Accept-Encoding "";
        }
    }
}
PS: upstream.com is just an example, I use another URL.

Related

Conditionally rewrite request url based on referer - Nginx

What I am trying to achieve: when a request comes from http://<ip>/vault/ui/ (the Referer in the request header) and it hits the http://<ip>/v1/* endpoint, it should be rewritten or redirected to http://<ip>/vault/v1/.
Can someone please help me solve this issue?
/etc/nginx/sites-enabled/reverse-proxy.conf
upstream command_center_vault {
    server command-center-0.blinchik.io:28200;
}

server {
    listen 80;
    listen [::]:80;

    location /vault/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding "";
        proxy_pass "http://command_center_vault/vault/";
        proxy_redirect /ui/ /vault/ui/;
    }

    location /vault/v1/ {
        proxy_pass "http://command_center_vault/v1/";
    }
}
(screenshot of the request headers omitted)
Update
A little more context: the overarching architecture looks like the picture below (architecture diagram omitted).
The configuration of the nginx server in the private subnet looks like this:
private subnet nginx
upstream consul_server {
    server brain-consul-server-0.blinchik.io:8500;
    server brain-consul-server-1.blinchik.io:8500;
    server brain-consul-server-2.blinchik.io:8500;
}

upstream vault_server {
    server brain-vault-server-0.blinchik.io:8200;
    server brain-vault-server-1.blinchik.io:8200;
}

server {
    listen 28500;
    listen [::]:28500;

    location /consul/ {
        proxy_pass "http://consul_server";
        sub_filter_once off;
        sub_filter_types application/javascript text/html;
        sub_filter "/v1/" "/consul_v1/";
    }

    location /consul_v1/ {
        proxy_pass "http://consul_server/v1/";
    }
}

server {
    listen 28200;
    listen [::]:28200;

    location /vault/ {
        proxy_pass "http://vault_server/";
        port_in_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding "";
        proxy_redirect /ui/ /vault/ui/;
        sub_filter_once off;
        sub_filter '<head>' '<head><base href="/vault/">';
        sub_filter '"/ui/' '"ui/';
        # inspired by this repo: https://github.com/Folcky/hashicorp-vault-and-nginx
    }

    location /v1/ {
        proxy_pass "http://vault_server/v1/";
    }
}
public subnet nginx
upstream command_center_vault {
    server command-center-0.blinchik.io:28200;
}

server {
    listen 80;
    listen [::]:80;

    location /vault/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding "";
        proxy_pass "http://command_center_vault/vault/";
        proxy_redirect /ui/ /vault/ui/;
    }

    location /vault/v1/ {
        proxy_pass "http://command_center_vault/v1/";
    }
}
The Consul part works fine. If I change the /vault/v1/ location in the public subnet configuration to /v1/, it works as well. But the problem is that other products I intend to add to the reverse proxy (like Nomad) also use the /v1/ path, and in that case there will be a conflict.
I think this one should work (it must be placed at the server context, outside any location blocks):
if ($http_referer ~ /vault/ui) {
    rewrite ^/v1(/.*) /vault/v1$1 last;
}
You can make the regex pattern stricter by including the //<ip> or https?://<ip> parts.
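For illustration, here is how the rewrite could sit in the public-subnet configuration shown above (a sketch, untested):
upstream command_center_vault {
    server command-center-0.blinchik.io:28200;
}

server {
    listen 80;
    listen [::]:80;

    # rewrite API calls that originate from the Vault UI
    if ($http_referer ~ /vault/ui) {
        rewrite ^/v1(/.*) /vault/v1$1 last;
    }

    location /vault/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding "";
        proxy_pass "http://command_center_vault/vault/";
        proxy_redirect /ui/ /vault/ui/;
    }

    location /vault/v1/ {
        proxy_pass "http://command_center_vault/v1/";
    }
}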

Nginx Content-Disposition - Trim file name

I will briefly describe my problem: when a file is downloaded, I need to trim the first 14 characters (digits 0-9 and "-") from its name. It is just about the Content-Disposition header; how can I achieve something like that?
I want a file name that currently looks like this:
1235467890123-FileName.txt
to be presented as:
FileName.txt
Config file:
upstream oxide_io {
    ip_hash;
    server 127.0.0.1:xxx;
    server 127.0.0.1:xxx;
    server 127.0.0.1:xxx;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name oxidepolska.pl;
    error_log /var/log/nginx/forum-error.log error;
    ssl ***

    proxy_hide_header X-Powered-By;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    gzip ***

    location @oxide {
        proxy_pass http://oxide_io;
    }

    location ~ ^/assets/(.*) {
        root /var/www/forum/;
        try_files /build/public/$1 /public/$1 @oxide;
    }

    location /plugins/ {
        root /var/www/forum/build/public/;
        try_files $uri @oxide;
    }

    location ~ ^/public/uploads/files/(.*)$ {
        root /var/www/forum/public/uploads/files/;
        add_header Content-Disposition 'inline; filename="$1"';
    }

    location / {
        proxy_pass http://oxide_io;
    }

    error_page 502 503 /503.html;

    location = /503.html {
        root /var/www/forum/public/;
    }
}

server {
    if ($host = oxidepolska.pl) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name oxidepolska.pl;
}
As I finally understand it, you are trying to get rid of the timestamp prefix added to files uploaded on some forum? Here is another config, which manually adds the Content-Disposition header:
map $uri $content_disposition {
    '~/\d{13}-([^/]+)$' 'attachment; filename="$1"';
}
server {
    ...
    location <uploaded_files_location> {
        add_header Content-Disposition $content_disposition;
    }
    ...
}
When the requested file does not match the pattern \d{13}- (13 digits and a "-" sign before the rest of the file name), the $content_disposition variable evaluates to an empty string, so the Content-Disposition header is not added to the response.
➜ ~ curl -I https://ddl.oxidepolska.pl/1544820059040-lootconfig.zip
HTTP/2 200
server: nginx
date: Fri, 14 Dec 2018 21:54:56 GMT
content-type: application/zip
content-length: 7138
last-modified: Fri, 14 Dec 2018 20:40:59 GMT
etag: "5c14155b-1be2"
accept-ranges: bytes
Disclaimer: This answer is about modifying an existing Content-Disposition header, which is not what the OP required.
map $upstream_http_content_disposition $content_disposition {
    default $upstream_http_content_disposition;
    '~^(.*filename="?)[-\d]{14}(.+?)("?$)' $1$2$3;
}
server {
    ...
    location @oxide {
        proxy_pass http://oxide_io;
        proxy_hide_header Content-Disposition;
        add_header Content-Disposition $content_disposition;
    }
    ...
    location / {
        proxy_pass http://oxide_io;
        proxy_hide_header Content-Disposition;
        add_header Content-Disposition $content_disposition;
    }
    ...
}
(assuming the first 14 characters can only be the digits 0-9 and '-')
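To verify the rewritten header, the earlier request can be repeated and the header inspected (a sketch reusing the URL from the curl output above):
curl -sI https://ddl.oxidepolska.pl/1544820059040-lootconfig.zip | grep -i content-disposition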

nginx proxy pass content range

How can I make nginx send the Range header to the source server when it is passed by the user?
Currently I have tried this, but it did not work:
server {
    location / {
        if ($http_range) {
            set $var_arg_range $http_range;
        }
        if ($arg_range) {
            set $var_arg_range "bytes=$arg_range";
        }
        proxy_set_header Range $var_arg_range;
        proxy_pass https://content-na.drive.amazonaws.com;
        proxy_set_header If-Range "";
        proxy_set_header Host content-na.drive.amazonaws.com;
        proxy_set_header Range $var_arg_range;
        proxy_set_header Accept-Encoding "";
    }
}
I need to make HTML5 videos streamable.
Finally I found it. I need to pass the request headers to the source server with proxy_pass_request_headers. And don't forget to pass your custom Referer header:
server {
    postpone_output 0;
    resolver 8.8.8.8;
    proxy_set_header Referer "https://content-na.drive.amazonaws.com";
    proxy_set_header Host "content-na.drive.amazonaws.com";
    proxy_pass_request_headers on;
    proxy_ssl_verify off;
    proxy_method "GET";

    # proxy_pass is only valid inside a location block
    location / {
        proxy_pass https://content-na.drive.amazonaws.com;
    }
}
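A quick way to confirm that range requests are honored end-to-end (a sketch; the host and file path are placeholders, not taken from the question):
curl -sI -H 'Range: bytes=0-1023' http://localhost/some-video.mp4
A 206 Partial Content status with a Content-Range header in the reply means the Range header reached the source server and was honored.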

http_sub_module / sub_filter of nginx and reverse proxy not working

I am trying to reverse proxy my website and modify the content.
To do so, I compiled nginx with sub_filter.
It now accepts the sub_filter directive, but somehow it does not work.
server {
    listen 8080;
    server_name www.xxx.com;

    access_log /var/log/nginx/www.goparts.access.log main;
    error_log /var/log/nginx/www.goparts.error.log;

    root /usr/share/nginx/html;
    index index.html index.htm;

    ## send request back to apache1 ##
    location / {
        sub_filter <title> '<title>test</title>';
        sub_filter_once on;
        proxy_pass http://www.google.fr;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Please help me
Check if the upstream source has gzip turned on; if so, you need
proxy_set_header Accept-Encoding "";
so the whole thing would look something like:
location / {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://upstream.site/;
    sub_filter_types text/css;
    sub_filter_once off;
    sub_filter .upstream.site special.our.domain;
}
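To confirm whether the upstream really compresses its responses (a sketch; upstream.site is the placeholder from the snippet above):
curl -sI -H 'Accept-Encoding: gzip' http://upstream.site/ | grep -i content-encoding
If this prints a content-encoding: gzip line, clearing Accept-Encoding as above is what lets sub_filter see uncompressed text.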
Check these links
https://www.ruby-forum.com/topic/178781
https://forum.nginx.org/read.php?2,226323,226323
http://www.serverphorums.com/read.php?5,542078

How do I fix this nginx configuration with almost-duplicate proxied locations?

I have the following nginx configuration for one of my virtual servers:
upstream app_example_https {
    server 127.0.0.1:1340;
}

proxy_cache_path /Users/jaanus/dev/nginxcache levels=1:2 keys_zone=S3CACHE:10m;
proxy_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 0.0.0.0:1338;
    server_name localhost;

    ssl on;
    ssl_certificate /Users/jaanus/dev/devHttpsCert.pem;
    ssl_certificate_key /Users/jaanus/dev/devHttpsKey.pem;

    location = / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
        proxy_set_header Authorization '';
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers Set-Cookie;
        proxy_cache S3CACHE;
        proxy_cache_valid any 60m;
        add_header X-Cached $upstream_cache_status;
        proxy_pass http://something.s3-website-us-east-1.amazonaws.com/;
    }

    location /static/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
        proxy_set_header Authorization '';
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers Set-Cookie;
        proxy_cache S3CACHE;
        proxy_cache_valid any 60m;
        add_header X-Cached $upstream_cache_status;
        proxy_pass http://something.s3-website-us-east-1.amazonaws.com/static/;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://app_example_https/;
        proxy_redirect off;
    }
}
What this does in English:
There’s an nginx frontend which serves requests either from a static Amazon S3 site, or an application server.
All requests to / (site root) and /static are reverse-proxied from Amazon S3. All other requests are reverse-proxied from the application server.
Now, the problem: there are two almost identical location blocks for S3. This was the only way I could make this configuration work, with two specific folders (the root and /static) served from S3 and everything else going to the application server.
Two almost-identical blocks look dumb and are not scalable. When I add more such folders, I don't want to keep duplicating the blocks.
How do I merge the two locations into one location block while keeping everything working the same way?
You could put the repeating part into an external file and include it.
amazon.inc
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host 'something.s3-website-us-east-1.amazonaws.com';
proxy_set_header Authorization '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_cache S3CACHE;
proxy_cache_valid any 60m;
add_header X-Cached $upstream_cache_status;
proxy_pass http://something.s3-website-us-east-1.amazonaws.com;
your config
location = / {
    include amazon.inc;
}

location /static/ {
    include amazon.inc;
}

location / {
    # proxy to your app
}
If you prefer to keep it all in one file, you could use this trick:
error_page 470 = @amazon;

location = / {
    return 470;
}

location /static/ {
    return 470;
}

location @amazon {
    # proxy to amazon
}
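The named location can also reuse the include file from the first approach, e.g. (a sketch combining both):
location @amazon {
    include amazon.inc;
}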
You could use a regexp to merge several locations together, but I would not recommend doing that because it is hard to read and understand, and it is less efficient than simple prefix locations. But, just as an example:
# NOT RECOMMENDED
location ~ ^/($|static/) {
    # proxy to amazon
}
