How to get nginx to set Server header to that of the upstream? - nginx

I have a very specific Server: header set upstream; however, when nginx acts as a reverse proxy, it sets its own Server header (for example, Server: nginx/1.23.1). The upstream's Server header is dynamic and can change on every request (e.g. Server: gunicorn/19.4.5 or Server: gunicorn/20.0.4). Is there a way to pass the upstream's Server header through nginx so that it is returned exactly as the upstream sent it? I know there is more_set_headers, but that sets headers to a static value. I need them to stay dynamic, based on what the upstream behind proxy_pass is sending.
sample config:
http {
    log_format custom '{"http_ssl_ja3": "$http_ssl_ja3", "http_ssl_ja3_hash": "$http_ssl_ja3_hash", "remote_addr": "$remote_addr", "request": "$request"}';
    server {
        proxy_busy_buffers_size 512k;
        proxy_buffers 4 512k;
        proxy_buffer_size 256k;
        listen 0.0.0.0:8443 ssl;
        ssl_protocols TLSv1.3 TLSv1.1 TLSv1.2;
        ssl_dhparam /etc/nginx/dhparam.pem;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
        access_log /dev/stdout custom;
        ssl_certificate_key "redacted";
        ssl_certificate "redacted";
        location = / {
            proxy_pass http://localhost:8080;
        }
    }
}

I figured it out. You can pass through specific headers by specifying proxy_pass_header Server;. The correct headers are now being proxied.
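For reference, a minimal sketch of where the directive fits in the sample config above (host and port are just the illustrative values from the question). By default nginx does not pass the proxied server's Server header and substitutes its own; proxy_pass_header Server tells it to forward whatever the upstream sent:
location = / {
    proxy_pass http://localhost:8080;
    # Forward the upstream's dynamic Server header (e.g. gunicorn/20.0.4)
    # instead of replacing it with nginx's own Server value.
    proxy_pass_header Server;
}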

Related

Need help in simulating (and blocking) HTTP_HOST spoofing attacks

I have an nginx reverse proxy serving multiple small web services. Each server has a different domain name and is individually protected with SSL using Certbot. The installation for these was pretty standard, as provided by Ubuntu 20.04.
I have a default server block to catch requests and return a 444 where the hostname does not match one of my server names. However, about 3-5 times per day a request gets through to my first server (which happens to be Django), which then throws the "Not in ALLOWED_HOSTS" message. Since this is the first server block, I'm assuming something in my ruleset doesn't match any of the blocks and the request is sent upstream to serverA.
Since the failure is rare, and in order to simulate this HOST_NAME spoofing attack, I have tried using curl as well as netcat with raw text files to mimic the situation, but I am not able to get past my nginx, i.e. I get a 444 back as expected.
Can you help me 1) simulate an attack with the right tools and 2) identify how to fix it? Since this is reaching my server, I'm assuming it is coming over https?
My sanitized sudo nginx -T, and an example of an attack are shown below.
ubuntu@ip-A.B.C.D:/etc/nginx/conf.d$ sudo nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    # Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    # Gzip Settings
    gzip on;
    # Virtual Host Configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
# configuration file /etc/nginx/modules-enabled/50-mod-http-image-filter.conf:
load_module modules/ngx_http_image_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-http-xslt-filter.conf:
load_module modules/ngx_http_xslt_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-mail.conf:
load_module modules/ngx_mail_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-stream.conf:
load_module modules/ngx_stream_module.so;
# configuration file /etc/nginx/mime.types:
types {
    text/html html htm shtml;
    text/css css;
    # Many more here.. removed to shorten list
    video/x-msvideo avi;
}
# configuration file /etc/nginx/conf.d/serverA.conf:
upstream serverA {
    server 127.0.0.1:8000;
    keepalive 256;
}
server {
    server_name serverA.com www.serverA.com;
    client_max_body_size 10M;
    location / {
        proxy_pass http://serverA;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate ...; # managed by Certbot
    ssl_certificate_key ...; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = serverA.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.serverA.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name serverA.com www.serverA.com;
    return 404; # managed by Certbot
}
# configuration file /etc/letsencrypt/options-ssl-nginx.conf:
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA";
# configuration file /etc/nginx/conf.d/serverB.conf:
upstream serverB {
    server 127.0.0.1:8002;
    keepalive 256;
}
server {
    server_name serverB.com fsn.serverB.com www.serverB.com;
    client_max_body_size 10M;
    location / {
        proxy_pass http://serverB;
        ... as above ...
    }
    listen 443 ssl; # managed by Certbot
    ... as above ...
}
server {
    if ($host = serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = fsn.serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name serverB.com fsn.serverB.com www.serverB.com;
    listen 80;
    return 404; # managed by Certbot
}
# Another similar serverC, serverD etc.
# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # server_name "";
    return 444;
}
Request data from a request that successfully gets past nginx to reach serverA (Django), where it throws an error. (Note that the path will 404, and the HTTP_HOST header is not one of my server names. More often, the HTTP_HOST comes in with my static IP address as well.)
Exception Type: DisallowedHost at /movie/bCZgaGBj
Exception Value: Invalid HTTP_HOST header: 'www.tvmao.com'. You may need to add 'www.tvmao.com' to ALLOWED_HOSTS.
Request information:
USER: [unable to retrieve the current user]
GET: No GET data
POST: No POST data
FILES: No FILES data
COOKIES: No cookie data
META:
HTTP_ACCEPT = '*/*'
HTTP_ACCEPT_LANGUAGE = 'zh-cn'
HTTP_CACHE_CONTROL = 'no-cache'
HTTP_CONNECTION = 'Upgrade'
HTTP_HOST = 'www.tvmao.com'
HTTP_REFERER = '/movie/bCZgaGBj'
HTTP_USER_AGENT = 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_2_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.3 Mobile/15E148 Safari/604.1'
HTTP_X_FORWARDED_FOR = '27.124.12.23'
HTTP_X_REAL_IP = '27.124.12.23'
PATH_INFO = '/movie/bCZgaGBj'
QUERY_STRING = ''
REMOTE_ADDR = '127.0.0.1'
REMOTE_HOST = '127.0.0.1'
REMOTE_PORT = 44058
REQUEST_METHOD = 'GET'
SCRIPT_NAME = ''
SERVER_NAME = '127.0.0.1'
SERVER_PORT = '8000'
wsgi.multiprocess = True
wsgi.multithread = True
Here's how I've tried to simulate the attack using raw http requests and netcat:
me@linuxmachine:~$ cat raw.http
GET /dashboard/ HTTP/1.1
Host: serverA.com
Host: test.com
Connection: close
me@linuxmachine:~$ cat raw.http | nc A.B.C.D 80
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 27 Jan 2023 15:05:13 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
If I send my correct serverA.com as the host header, I get a 301 (redirecting to https).
If I send an incorrect host header (e.g. test.com) I get an empty response (expected).
If I send two host headers (correct and incorrect), I get a 400 Bad Request.
If I send the correct host, but to port 443, I get a 400 "The plain HTTP request was sent to HTTPS port"...
How do I simulate a request to get past nginx to my upstream serverA like the bots do? And how do I block it with nginx?
Thanks!
There is something magical about asking SO. The process of writing makes the answer appear :)
To my first question above, of simulating the spoof, I was able to just use curl in the following way:
me@linuxmachine:~$ curl -H "Host: A.B.C.D" https://example.com
I'm pretty sure I've tried this before, but I'm not sure why I didn't try this exact spell (perhaps I was sending a different header, like Http-Host: or something).
With this call, I was able to trigger the error as before, which made it easy to test the nginx configuration and answer the second question.
It was clear that the spoof was coming in on 443, which led me to this very informative post on StackExchange.
It also explained why we can't just listen on 443 and respond with a 444 without first having exchanged SSL certificates, due to the way SSL works.
The three options suggested (haproxy, a fake certificate, and the if ($host ...) check) might all work, but the simplest, I think, is the last one. Since this if () is not within a location context, I believe it to be OK.
My new serverA block looks like this:
server {
    server_name serverA.com www.serverA.com;
    client_max_body_size 10M;
    ## This fixes it
    if ( $http_host !~* ^(serverA\.com|www\.serverA\.com)$ ) {
        return 444;
    }
    ## and it's not inside the location context...
    location / {
        proxy_pass http://upstream;
        proxy_http_version 1.1;
        ...
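The answer above uses the if ($host ...) check; a related alternative on newer nginx (1.19.4 and later) is ssl_reject_handshake, which lets a catch-all server on port 443 refuse unmatched SNI names without needing any certificate. A minimal sketch, assuming nginx >= 1.19.4:
server {
    listen 443 ssl default_server;
    # Aborts the TLS handshake for any SNI name that is not matched by
    # another server block, so spoofed hosts never reach an upstream
    # and no ssl_certificate is required in this block.
    ssl_reject_handshake on;
}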

HTTP forwarding Plaintext warning (nginx)

So I was updating my nginx configuration to meet some security checks, to get grade A+ from Qualys SSL Labs.
I do get the A+, but there is one warning I don't understand: the "HTTP forwarding / Plaintext" warning shown in the report.
However, I do have an http-to-https redirect and it seems to be working fine. Does anyone know what would be the cause of this warning?
I found this question: HTTP forwarding PLAINTEXT warning, but it talks about Apache.
I also found this: https://github.com/ssllabs/ssllabs-scan/issues/154, where it was mentioned that it could just be a bug, but that issue is old, so it's not clear.
My nginx configuration:
http {
    upstream odoo-upstream {
        server odoo:8069 weight=1 fail_timeout=0;
    }
    upstream odoo-im-upstream {
        server odoo:8072 weight=1 fail_timeout=0;
    }
    # Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Enable SSL session caching for improved performance
    # http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
    ssl_session_cache shared:ssl_session_cache:5m;
    ssl_session_timeout 24h; # time which sessions can be re-used.
    # Because the proper rotation of session ticket encryption key is
    # not yet implemented in Nginx, we should turn this off for now.
    ssl_session_tickets off;
    # Default size is 16k, reducing it can slightly improve performance.
    ssl_buffer_size 8k;
    # Gzip Settings
    gzip on;
    # http redirects to https
    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }
    charset utf-8;
    server {
        # server port and name
        listen 443 ssl http2;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Frame-Options sameorigin;
        add_header X-Content-Type-Options nosniff;
        add_header X-Xss-Protection "1; mode=block";
        # Specifies the maximum accepted body size of a client request,
        # as indicated by the request header Content-Length.
        client_max_body_size 200m;
        # add ssl specific settings
        keepalive_timeout 60;
        ssl_certificate /etc/ssl/nginx/domain.bundle.crt;
        ssl_certificate_key /etc/ssl/nginx/domain.key;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.4.4 8.8.8.8;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        # increase proxy buffer to handle some Odoo web requests
        proxy_buffers 16 64k;
        proxy_buffer_size 128k;
        # general proxy settings
        # force timeouts if the backend dies
        proxy_connect_timeout 600s;
        proxy_send_timeout 600s;
        proxy_read_timeout 600s;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        # set headers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        # Let the Odoo web service know that we’re using HTTPS, otherwise
        # it will generate URL using http:// and not https://
        proxy_set_header X-Forwarded-Proto $scheme;
        # by default, do not forward anything
        proxy_redirect off;
        proxy_buffering off;
        location / {
            proxy_pass http://odoo-upstream;
        }
        location /longpolling {
            proxy_pass http://odoo-im-upstream;
        }
        # cache some static data in memory for 60mins.
        # under heavy load this should relieve stress on the Odoo web interface a bit.
        location /web/static/ {
            proxy_cache_valid 200 60m;
            proxy_buffering on;
            expires 864000;
            proxy_pass http://odoo-upstream;
        }
        include /etc/nginx/custom_error_page.conf;
    }
    include /etc/nginx/conf.d/*.conf;
}
events {
    worker_connections 1024;
}

Nginx reverse proxy net::ERR_HTTP2_PROTOCOL_ERROR

I have a Java (Micronaut) + Vue.js application that is running on port 8081. The app is accessed through an nginx reverse proxy that also uses an SSL certificate from Let's Encrypt. Everything seems to work fine except for file uploads in the app. If a small file (maybe < 1 MB) is uploaded, everything works fine. If a larger file is uploaded, the upload request fails and the Chrome console shows net::ERR_HTTP2_PROTOCOL_ERROR. If I send the large file upload request with a tool like Postman, the response status is shown to be 200 OK, but the file has still not been uploaded and the response sent back from the server seems to be partial.
If I skip the nginx proxy and access the API on port 8081 directly, then the larger files can be uploaded as well.
The nginx error log shows that the upload request timed out.
2021/06/07 20:45:20 [error] 32801#32801: *21 upstream timed out (110: Connection timed out) while reading upstream, client: XXX, server: XXX, request: "POST XXX HTTP/2.0", upstream: "XXX", host: "XXX", referrer: "XXX"
I have similar setups with nginx working for other apps and there all file uploads are working as expected. But in this case I am not able to figure out why the net::ERR_HTTP2_PROTOCOL_ERROR occurs. I have tried many suggestions that I could find from the internet but none seem to work in this case.
I have verified that there is enough space on the server to upload the files. Setting proxy_max_temp_file_size 0; as suggested here did not have any effect. Increasing http2_max_field_size and http2_max_header_size or large_client_header_buffers as suggested here did not work.
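For context, a rough sketch of where those attempted directives would sit in a server block. The values here are illustrative, not taken from the question, and http2_max_field_size / http2_max_header_size only exist on nginx builds older than 1.19.7:
server {
    listen 443 ssl http2;
    # Tried: don't spool large upstream responses to a temp file
    proxy_max_temp_file_size 0;
    # Tried: raise HTTP/2 header limits (directives removed in nginx 1.19.7+)
    http2_max_field_size 16k;
    http2_max_header_size 32k;
    # Tried: raise buffers used when parsing large request headers
    large_client_header_buffers 4 16k;
    ...
}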
My global nginx configuration looks like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
    # multi_accept on;
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Nginx configuration for the specific host looks like this:
server {
    server_name XXX;
    client_max_body_size 100M;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://XXX:8081;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/XXX/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/XXX/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = XXX) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = XXX) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name XXX;
    return 404; # managed by Certbot
}

trying to set nginx with https configuration returns Invalid Host header

I'm trying to configure nginx with https so I can browse to it (www.oidctest.com, mapped to localhost in the hosts file) and have it redirect to my application, which resides at www.oidctest.com:3000.
My nginx configuration is:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.oidctest.com;
    return 301 http://$server_name:3000$request_uri;
    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /nginx-1.14.1/conf/server.crt;
    ssl_certificate_key /nginx-1.14.1/conf/server.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    #ssl_dhparam /path/to/dhparam.pem;
    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE....HA:!DSS';
    ssl_prefer_server_ciphers on;
    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    #add_header Strict-Transport-Security max-age=15768000;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    #ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;
    resolver 8.8.8.8;
    location / {
        proxy_pass http://localhost:3000;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
But when I browse to "www.oidctest.com", I get: "Invalid Host header".
Can you suggest a solution?
In the nginx.conf, I would replace
proxy_pass http://localhost:3000;
With
proxy_pass http://127.0.0.1:3000;
But more importantly, the error is related to your ReactJS app (I guess it is one of those?). You can add allowedHosts under devServer in your webpack.config.js:
devServer: {
    compress: true,
    inline: true,
    port: '8080',
    allowedHosts: [
        '.amazonaws.com'
    ]
},

NGINX: too many redirects

I'm hosting a meteor app behind an nginx proxy. We were using https (and thus 301 redirecting all http connections to 443). However, we've now gone back to http (and thus 301 redirecting all https connections to 80).
When I try to visit my site, I get an error saying, "The page isn’t redirecting properly". However, if I visit in incognito or after clearing my browser cache and cookies, everything works again.
Can I change anything in my nginx conf file (below) to fix this? I really don't want all of my visitors to have to clear their browsing data. Thanks!
server_tokens off; # for security-by-obscurity: stop displaying nginx version
# this section is needed to proxy web-socket connections
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
# HTTP
server {
    listen 80;
    server_name [REDACTED];
    # redirect to meteor
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # allow websockets
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr; # preserve client IP
    }
}
# HTTPS server
server {
    listen 443 ssl spdy; # we enable SPDY here
    server_name [REDACTED];
    root html; # irrelevant
    index index.html; # irrelevant
    # redirect to http
    return 301 http://$host$request_uri;
    ssl_certificate /etc/[REDACTED].pem;
    ssl_certificate_key /etc/[REDACTED].pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers '[REDACTED]';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;
    # If your application is not compatible with IE <= 10, this will redirect visitors to a page advising a browser update
    # This works because IE 11 does not present itself as MSIE anymore
    if ($http_user_agent ~ "MSIE" ) {
        return 303 https://browser-update.org/update.html;
    }
    location ~ /.well-known {
        allow all;
    }
}
