openssl s_client shows wrong cert; browser shows correct - nginx

We have two domains pointing to the same public AWS ELB, and behind that ELB we have nginx, which routes requests to the correct server.
When we hit sub.domainA.com in a browser (Chrome/Safari/etc.), everything works, but when we use tools like openssl, we get a cert error:
openssl s_client -host sub.domainA.com -port 443 -prexit -showcerts
CONNECTED(00000003)
depth=0 /OU=Domain Control Validated/CN=*.domainB.com
verify error:num=20:unable to get local issuer certificate
verify return:1
For some reason, domainA is serving domainB's certs and I have no idea why.
I'm almost 100% sure the issue is with our nginx config (more specifically, not having a default server block).
Here is our nginx config:
worker_processes auto;
error_log /var/log/nginx/error.log;
error_log /var/log/nginx/error.log warn;
error_log /var/log/nginx/error.log notice;
error_log /var/log/nginx/error.log info;
events {
worker_connections 1024;
}
http {
include /usr/local/openresty/nginx/conf/mime.types;
default_type application/octet-stream;
...
#
# DomainB
#
server {
ssl on;
ssl_certificate /etc/nginx/domainB.crt;
ssl_certificate_key /etc/nginx/domainB.key;
listen 8080;
server_name *.domainB.com;
access_log /var/log/nginx/access.log logstash_json;
error_page 497 301 =200 https://$host$request_uri;
set $target_web "web.domainB_internal.com:80";
location / {
keepalive_timeout 180;
resolver 10.10.0.2 valid=30s;
proxy_set_header Host $host;
proxy_pass http://$target_web;
proxy_set_header X-Unique-ID $request_id;
}
}
#
# DomainA
#
server {
ssl on;
ssl_certificate /etc/nginx/domainA.crt;
ssl_certificate_key /etc/nginx/domainA.key;
listen 8080;
server_name *.domainA.com;
access_log /var/log/nginx/access.log logstash_json;
error_page 497 301 =200 https://$host$request_uri;
set $target_web "web.domainA_internal.com:80";
location / {
keepalive_timeout 180;
resolver 10.10.0.2 valid=30s;
proxy_set_header Host $host;
proxy_pass http://$target_web;
proxy_set_header X-Unique-ID $request_id;
}
}
}
It shouldn't even be falling into the domainB block at all!
Yet we see it when using "openssl s_client", but not in the browser.
Any ideas why we see domainB at all when using "openssl s_client -host sub.domainA.com"?
Very similar to Openssl shows a different server certificate while browser shows correctly
Very helpful website: https://tech.mendix.com/linux/2014/10/29/nginx-certs-sni/

You need to specify the -servername option in your openssl command.
From the openssl s_client docs:
-servername name
Set the TLS SNI (Server Name Indication) extension in the ClientHello
message.
So try something like
openssl s_client -connect sub.domainA.com:443 -showcerts -servername sub.domainA.com
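Your hunch about the missing default server block also matters here: when a client sends no SNI at all, nginx serves the default server for that listen port, which is simply the first server block in the file (domainB's, in your config) unless one is marked default_server. A sketch of an explicit fallback, reusing the cert paths from your config:

```nginx
# Sketch: make the fallback explicit for clients that send no SNI.
# Without this, nginx uses the first server block (domainB) as default.
server {
    listen 8080 default_server;
    server_name _;
    ssl on;
    ssl_certificate     /etc/nginx/domainA.crt;  # or a dedicated fallback cert
    ssl_certificate_key /etc/nginx/domainA.key;
    return 444;  # or serve whichever domain should win without SNI
}
```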

Related

Need help in simulating (and blocking) HTTP_HOST spoofing attacks

I have an nginx reverse proxy serving multiple small web services. Each of the servers has a different domain name and is individually protected with SSL using Certbot. The installation for these was pretty standard, as provided by Ubuntu 20.04.
I have a default server block to catch requests and return a 444 where the hostname does not match one of my server names. However, about 3-5 times per day, a request gets through to my first server (which happens to be Django), which then throws the "Not in ALLOWED_HOSTS" message. Since this is the first server block, I'm assuming something in my ruleset doesn't match any of the blocks and the request is sent upstream to serverA.
Since the failure is rare, in order to simulate this HOST_NAME spoofing attack I have tried using curl as well as netcat with raw text files to mimic the situation, but I am not able to get past my nginx, i.e. I get a 444 back as expected.
Can you help me 1) simulate an attack with the right tools, and 2) identify how to fix it? I'm assuming, since this is reaching my server, that it is coming over https?
My sanitized sudo nginx -T, and an example of an attack are shown below.
ubuntu@ip-A.B.C.D:/etc/nginx/conf.d$ sudo nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# SSL Settings
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
# Logging Settings
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip Settings
gzip on;
# Virtual Host Configs
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
# configuration file /etc/nginx/modules-enabled/50-mod-http-image-filter.conf:
load_module modules/ngx_http_image_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-http-xslt-filter.conf:
load_module modules/ngx_http_xslt_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-mail.conf:
load_module modules/ngx_mail_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-stream.conf:
load_module modules/ngx_stream_module.so;
# configuration file /etc/nginx/mime.types:
types {
text/html html htm shtml;
text/css css;
# Many more here.. removed to shorten list
video/x-msvideo avi;
}
# configuration file /etc/nginx/conf.d/serverA.conf:
upstream serverA {
server 127.0.0.1:8000;
keepalive 256;
}
server {
server_name serverA.com www.serverA.com;
client_max_body_size 10M;
location / {
proxy_pass http://serverA;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
listen 443 ssl; # managed by Certbot
ssl_certificate ...; # managed by Certbot
ssl_certificate_key ...; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = serverA.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = www.serverA.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name serverA.com www.serverA.com;
return 404; # managed by Certbot
}
# configuration file /etc/letsencrypt/options-ssl-nginx.conf:
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA";
# configuration file /etc/nginx/conf.d/serverB.conf:
upstream serverB {
server 127.0.0.1:8002;
keepalive 256;
}
server {
server_name serverB.com fsn.serverB.com www.serverB.com;
client_max_body_size 10M;
location / {
proxy_pass http://serverB;
... as above ...
}
listen 443 ssl; # managed by Certbot
... as above ...
}
server {
if ($host = serverB.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = www.serverB.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = fsn.serverB.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name serverB.com fsn.serverB.com www.serverB.com;
listen 80;
return 404; # managed by Certbot
}
# Another similar serverC, serverD etc.
# Default server configuration
#
server {
listen 80 default_server;
listen [::]:80 default_server;
# server_name "";
return 444;
}
Request data from a request that successfully gets past nginx to reach serverA (Django), where it throws an error: (Note that the path will 404, and the HTTP_HOST headers are not my server names. More often, the HTTP_HOST comes in with my static IP address as well.)
Exception Type: DisallowedHost at /movie/bCZgaGBj
Exception Value: Invalid HTTP_HOST header: 'www.tvmao.com'. You may need to add 'www.tvmao.com' to ALLOWED_HOSTS.
Request information:
USER: [unable to retrieve the current user]
GET: No GET data
POST: No POST data
FILES: No FILES data
COOKIES: No cookie data
META:
HTTP_ACCEPT = '*/*'
HTTP_ACCEPT_LANGUAGE = 'zh-cn'
HTTP_CACHE_CONTROL = 'no-cache'
HTTP_CONNECTION = 'Upgrade'
HTTP_HOST = 'www.tvmao.com'
HTTP_REFERER = '/movie/bCZgaGBj'
HTTP_USER_AGENT = 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_2_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.3 Mobile/15E148 Safari/604.1'
HTTP_X_FORWARDED_FOR = '27.124.12.23'
HTTP_X_REAL_IP = '27.124.12.23'
PATH_INFO = '/movie/bCZgaGBj'
QUERY_STRING = ''
REMOTE_ADDR = '127.0.0.1'
REMOTE_HOST = '127.0.0.1'
REMOTE_PORT = 44058
REQUEST_METHOD = 'GET'
SCRIPT_NAME = ''
SERVER_NAME = '127.0.0.1'
SERVER_PORT = '8000'
wsgi.multiprocess = True
wsgi.multithread = True
Here's how I've tried to simulate the attack using raw http requests and netcat:
me@linuxmachine:~$ cat raw.http
GET /dashboard/ HTTP/1.1
Host: serverA.com
Host: test.com
Connection: close
me@linuxmachine:~$ cat raw.http | nc A.B.C.D 80
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 27 Jan 2023 15:05:13 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
If I send my correct serverA.com as the host header, I get a 301 (redirecting to https).
If I send an incorrect host header (e.g. test.com) I get an empty response (expected).
If I send two host headers (correct and incorrect) I get a 400 Bad Request.
If I send the correct host, but to port 443, I get a 400 "plain HTTP sent to HTTPS port"...
How do I simulate a request to get past nginx to my upstream serverA like the bots do? And how do I block it with nginx?
Thanks!
There is something magical about asking SO. The process of writing makes the answer appear :)
To my first question above, of simulating the spoof, I was able to just use curl in the following way:
me@linuxmachine:~$ curl -H "Host: A.B.C.D" https://example.com
I'm pretty sure I've tried this before, but am not sure why I didn't try this exact spell (perhaps I was sending a different header, like Http-Host: or something).
With this call, I was able to trigger the error as before, which made it easy to test the nginx configuration and answer the second question.
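The mechanics are easy to reproduce locally, too: the Host header is just client-supplied text, so any value reaches the backend unless the proxy filters it. A minimal sketch with Python's standard library (a throwaway local server standing in for the Django upstream; no nginx involved):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

seen_hosts = []

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record whatever Host header the client claimed to be using.
        seen_hosts.append(self.headers["Host"])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Connect to 127.0.0.1 but claim to be www.tvmao.com, like the bots do.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/movie/bCZgaGBj", headers={"Host": "www.tvmao.com"})
resp = conn.getresponse()
body = resp.read()
server.shutdown()
print(resp.status, seen_hosts)
```

The backend happily records the spoofed hostname, which is exactly what Django's ALLOWED_HOSTS check is complaining about.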
It was clear that the spoof was coming in on 443, which led me to this very informative post on StackExchange.
This also explained why we can't just listen on 443 and respond with a 444 without first having traded SSL certificates, due to the way SSL works.
The three options suggested (haproxy, a fake cert, and the if ($host ...) directive) might all work, but the simplest, I think, is the last one. Since this if ( ) is not within the location context, I believe this to be ok.
My new serverA block looks like this:
server {
server_name serverA.com www.serverA.com;
client_max_body_size 10M;
## This fixes it
if ( $http_host !~* ^(serverA\.com|www\.serverA\.com)$ ) {
return 444;
}
## and it's not inside the location context...
location / {
proxy_pass http://upstream;
proxy_http_version 1.1;
...

Nginx reverse proxy net::ERR_HTTP2_PROTOCOL_ERROR

I have a Java (Micronaut) + Vue.js application that is running on port 8081. The app is accessed through an nginx reverse proxy that also uses an SSL certificate from Letsencrypt. Everything seems to work fine except for file uploads in the app. If a small file is being uploaded, maybe < 1MB, then everything works fine. If a larger file is being uploaded, the file upload request fails and in the Chrome console net::ERR_HTTP2_PROTOCOL_ERROR is shown. If I send the large file upload request with some tool like Postman, the response status is shown to be 200 OK, but the file has still not been uploaded and the response sent back from the server seems to be partial.
If I skip the nginx proxy and access the API on port 8081 directly then also the larger files can be uploaded.
The nginx error log shows that the upload request timed out.
2021/06/07 20:45:20 [error] 32801#32801: *21 upstream timed out (110: Connection timed out) while reading upstream, client: XXX, server: XXX, request: "POST XXX HTTP/2.0", upstream: "XXX", host: "XXX", referrer: "XXX"
I have similar setups with nginx working for other apps and there all file uploads are working as expected. But in this case I am not able to figure out why the net::ERR_HTTP2_PROTOCOL_ERROR occurs. I have tried many suggestions that I could find from the internet but none seem to work in this case.
I have verified that there is enough space on the server to upload the files. Setting proxy_max_temp_file_size 0; as suggested here did not have any effect. Increasing http2_max_field_size and http2_max_header_size or large_client_header_buffers as suggested here did not work.
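For reference, the tuning described above corresponds to directives like the following (a sketch of the attempted settings, not a confirmed fix; the "while reading upstream" timeout in the log is usually governed by proxy_read_timeout, which may be worth raising as well):

```nginx
# Sketch of the attempted tuning; proxy_read_timeout matches the
# "upstream timed out ... while reading upstream" error most directly.
server {
    # ... existing listen/server_name/ssl directives ...
    client_max_body_size 100M;
    large_client_header_buffers 4 16k;
    http2_max_field_size  16k;
    http2_max_header_size 32k;
    location / {
        proxy_max_temp_file_size 0;
        proxy_read_timeout 300s;  # default is 60s
        # ... existing proxy_set_header/proxy_pass directives ...
    }
}
```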
My global nginx configuration looks like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Nginx configuration for the specific host looks like this:
server {
server_name XXX;
client_max_body_size 100M;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://XXX:8081;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/XXX/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/XXX/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = XXX) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = XXX) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name XXX;
return 404; # managed by Certbot
}

Nginx reverse proxy getting 400 error bad request

I'm trying to set up nginx as a reverse proxy to an application.
When I send the same request over http, it works fine.
I think I've done everything right, and I still get the 400 error. Any help would be really nice.
My nginx configuration file :
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
large_client_header_buffers 4 16k;
client_max_body_size 10M;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log debug;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
My site configuration :
server {
listen 80;
server_name example.com;
location /eai {
proxy_pass http://192.168.44.128:8000;
}
}
server {
listen 443 ssl;
ssl_certificate /etc/nginx/certificates/myssl.crt;
ssl_certificate_key /etc/nginx/certificates/myssl.key;
server_name example.com;
location /eai {
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_pass http://192.168.44.128:8000;
}
}
My python code to call the application behind the proxy :
import requests
url = 'https://example.com/eai/request/import'
file_list = [
('file', ('test.csv', open('test.csv', 'rb'), 'text/html')),
]
r = requests.post(url, files=file_list, proxies={"https":"https://192.168.44.241","http":"http://192.168.44.241"}, verify=False)
The relevant info line in error.log:
client sent invalid request while reading client request line, client: 192.168.44.1, server: example.com, request: "CONNECT example.com:443 HTTP/1.0"
Thanks in advance for any help
Regards
Here is your problem:
proxies={"https":"https://192.168.44.241","http":"http://192.168.44.241"}
Your client connection is not actually going through a proxy, so this should not be present at all. You are just making a normal HTTPS request to a normal HTTPS server.
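To illustrate the corrected call: drop the proxies argument entirely. The sketch below prepares (but does not send) the request, so the multipart body can be inspected offline; the URL and file name are the hypothetical ones from the question:

```python
import requests

# Corrected call: no `proxies` argument -- nginx is the origin server
# here, not a forward proxy, so no CONNECT tunnel should be issued.
url = "https://example.com/eai/request/import"  # hypothetical endpoint from the question
file_list = [("file", ("test.csv", b"a,b\n1,2\n", "text/csv"))]

# Prepare the request instead of sending it, to inspect what would go out.
req = requests.Request("POST", url, files=file_list).prepare()
print(req.method, req.url)
print(req.headers["Content-Type"])  # multipart/form-data; boundary=...
```

Sending it is then just `requests.post(url, files=file_list, verify=False)` while testing with a self-signed cert.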

Unable to reverse proxy requests to Nifi running in the backend using client cert as auth mechanism

I have configured Nginx as a reverse proxy server for my Nifi application running in the backend on port 9443.
Here goes my Nginx conf:
worker_processes 1;
events { worker_connections 1024; }
map_hash_bucket_size 128;
sendfile on;
large_client_header_buffers 4 64k;
upstream nifi {
server cloud-analytics-test2-nifi-a.insights.io:9443;
}
server {
listen 443 ssl;
#ssl on;
server_name nifi-test-nginx.insights.io;
ssl_certificate /etc/nginx/cert1.pem;
ssl_certificate_key /etc/nginx/privkey1.pem;
ssl_client_certificate /etc/nginx/nifi-client.pem; # this contains server cert and ca cert
ssl_verify_client on;
ssl_verify_depth 2;
error_log /var/log/nginx/error.log debug;
proxy_ssl_certificate /etc/nginx/cert1.pem;
proxy_ssl_certificate_key /etc/nginx/privkey1.pem;
proxy_ssl_trusted_certificate /etc/nginx/nifi-client.pem;
location / {
proxy_pass https://nifi;
proxy_set_header X-ProxyScheme https;
proxy_set_header X-ProxyHost nifi-test-nginx.insights.io;
proxy_set_header X-ProxyPort 443;
proxy_set_header X-ProxyContextPath /;
proxy_set_header X-ProxiedEntitiesChain "<$ssl_client_s_dn>";
proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
}
}
}
Whenever I try to access Nifi using the Nginx reverse proxy address/hostname, I get the error below:
client SSL certificate verify error: (2:unable to get issuer certificate) while reading client request headers,
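That verify error means nginx could not find, inside the ssl_client_certificate bundle, the certificate of the CA that actually issued the client cert (with ssl_verify_depth 2, any intermediate must be in the bundle too). The check nginx performs can be reproduced with openssl alone — a sketch with throwaway file names, where ca.pem plays the role of nifi-client.pem:

```shell
# Create a throwaway CA and a client cert it signs, then verify the chain
# the same way nginx's ssl_verify_client does. All file names are examples.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.pem -subj "/CN=Demo CA"
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=demo-client"
openssl x509 -req -days 1 -in client.csr \
  -CA ca.pem -CAkey ca.key -CAcreateserial -out client.pem
# Succeeds only when ca.pem contains the full issuing chain; a bundle
# missing the issuer reproduces "unable to get issuer certificate":
openssl verify -CAfile ca.pem client.pem
```

Running the same `openssl verify -CAfile nifi-client.pem <client cert>` against the real files should show whether the bundle is missing the issuing CA.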

Get the certificate and key file names stored on Heroku to set up SSL on Nginx server

I wanted to add the certificate and key to the Nginx server that my application is served on and hosted by Heroku. This is what I currently have in my Nginx config file. Does proxying the SSL server work for this instead and keep the server secure? If not, then how am I supposed to get the file names for the .pem and .key files that I uploaded to Heroku for my specific application?
nginx.conf.erb
daemon off;
#Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
events {
use epoll;
accept_mutex on;
worker_connections <%= ENV['NGINX_WORKER_CONNECTIONS'] || 1024 %>;
}
http {
server_tokens off;
log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
access_log <%= ENV['NGINX_ACCESS_LOG_PATH'] || 'logs/nginx/access.log' %> l2met;
error_log <%= ENV['NGINX_ERROR_LOG_PATH'] || 'logs/nginx/error.log' %>;
include mime.types;
default_type text/html;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
#Must read the body in 65 seconds.
keepalive_timeout 65;
# handle SNI
proxy_ssl_server_name on;
upstream app_server {
server unix:/tmp/nginx.socket fail_timeout=0;
}
server {
listen <%= ENV["PORT"] %>;
server_name _;
# Define the specified charset to the “Content-Type” response header field
charset utf-8;
location / {
proxy_ssl_name <%= ENV["HEROKU_DOMAIN"] %>;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
client_max_body_size 5M;
}
location /static {
alias /app/flask_app/static;
}
}
}
If you create an SSL certificate from CloudFlare, you can't access it through the Heroku CLI, but you can download it through CloudFlare.
Please check if you have routed your domain web through Configure-Cloudflare-and-Heroku-over-HTTPS.
Download the SSL cert via CloudFlare.
Set up the SSL cert for Nginx: Setup SSL Cert.
Hope it helps.
EDIT
Put the SSL cert .key and .pem files into the same folder as nginx.conf.erb, i.e. domain_name.key and domain_name.pem.
Deploy to Heroku.
Use config like this:
ssl_certificate domain_name.pem;
ssl_certificate_key domain_name.key;
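For completeness, those two directives only take effect inside a TLS-terminating server block, e.g. (a sketch based on the nginx.conf.erb above; note that Heroku's router normally terminates TLS before traffic reaches the dyno, so confirm nginx-level SSL is actually what you need):

```nginx
# Sketch: where the cert directives would live (file names as in the answer).
server {
    listen <%= ENV["PORT"] %> ssl;
    server_name _;
    ssl_certificate     domain_name.pem;
    ssl_certificate_key domain_name.key;
}
```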
