I received a crt file from a partner and I want nginx to handle the SSL connection to them, so I am following this guide: https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/ I don't have a client pem file or a client key file. How can I generate these files from the crt to fill in these directives?
location /upstream {
    proxy_pass https://backend.example.com;
    proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;
}
Here is my current location block:
location /api {
    access_log /var/log/nginx/api.log upstream_logging;
    proxy_ssl_trusted_certificate /etc/nginx/partner.crt;
    # proxy_ssl_certificate_key /etc/nginx/client.key;
    # proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_verify off;
    proxy_ssl_verify_depth 2;
    proxy_ssl_session_reuse on;
    proxy_ssl_server_name on;
    #proxy_ssl_protocols TLSv1;
    proxy_pass https://api$uri$is_args$args;
}
With this configuration I get this error:
SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream
How do I get client.key? Is it generated from the crt file?
You do not need a client key.
The error SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol probably indicates that the protocol was not chosen correctly in proxy_ssl_protocols.
See the official documentation on nginx.org for proxy_ssl_protocols.
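A minimal sketch of the relevant part of that location, assuming the partner's endpoint speaks TLS 1.2 or later, no client certificate is required, and you optionally want to verify their certificate against the crt file they sent:

location /api {
    proxy_pass https://api$uri$is_args$args;
    proxy_ssl_server_name on;
    # pick protocols the upstream actually supports
    proxy_ssl_protocols TLSv1.2 TLSv1.3;
    # the partner's crt acts only as a trust anchor here, not as a client certificate
    proxy_ssl_trusted_certificate /etc/nginx/partner.crt;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
}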
I have an nginx reverse proxy serving multiple small web services. Each of the servers has a different domain name and is individually protected with SSL using Certbot. The installation for these was pretty standard, as provided by Ubuntu 20.04.
I have a default server block to catch requests and return a 444 where the hostname does not match one of my server names. However, about 3-5 times per day a request gets through to my first server (which happens to be Django), which then throws the "Not in ALLOWED_HOSTS" message. Since this is the first server block, I'm assuming something in my ruleset doesn't match any of the blocks and the request is sent upstream to serverA.
Since the failure is rare, and in order to simulate this HOST_NAME spoofing attack, I have tried to use curl as well as netcat with raw text files to mimic the situation, but I am not able to get past my nginx, i.e. I get a 444 back as expected.
Can you help me 1) simulate an attack with the right tools and 2) identify how to fix it? I'm assuming that since this is reaching my server, it is coming over https?
My sanitized sudo nginx -T output and an example of an attack are shown below.
ubuntu@ip-A.B.C.D:/etc/nginx/conf.d$ sudo nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    # Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings
    gzip on;

    # Virtual Host Configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
# configuration file /etc/nginx/modules-enabled/50-mod-http-image-filter.conf:
load_module modules/ngx_http_image_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-http-xslt-filter.conf:
load_module modules/ngx_http_xslt_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-mail.conf:
load_module modules/ngx_mail_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-stream.conf:
load_module modules/ngx_stream_module.so;
# configuration file /etc/nginx/mime.types:
types {
    text/html html htm shtml;
    text/css css;
    # Many more here.. removed to shorten list
    video/x-msvideo avi;
}
# configuration file /etc/nginx/conf.d/serverA.conf:
upstream serverA {
    server 127.0.0.1:8000;
    keepalive 256;
}

server {
    server_name serverA.com www.serverA.com;
    client_max_body_size 10M;

    location / {
        proxy_pass http://serverA;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate ...; # managed by Certbot
    ssl_certificate_key ...; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = serverA.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.serverA.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name serverA.com www.serverA.com;
    return 404; # managed by Certbot
}
# configuration file /etc/letsencrypt/options-ssl-nginx.conf:
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA";
# configuration file /etc/nginx/conf.d/serverB.conf:
upstream serverB {
    server 127.0.0.1:8002;
    keepalive 256;
}

server {
    server_name serverB.com fsn.serverB.com www.serverB.com;
    client_max_body_size 10M;

    location / {
        proxy_pass http://serverB;
        ... as above ...
    }

    listen 443 ssl; # managed by Certbot
    ... as above ...
}

server {
    if ($host = serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = fsn.serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name serverB.com fsn.serverB.com www.serverB.com;
    listen 80;
    return 404; # managed by Certbot
}
# Another similar serverC, serverD etc.
# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # server_name "";
    return 444;
}
Request data from a request that successfully got past nginx to reach serverA (Django), where it throws an error. (Note that the path will 404, and the HTTP_HOST header is not one of my server names. More often, the HTTP_HOST comes in with my static IP address as well.)
Exception Type: DisallowedHost at /movie/bCZgaGBj
Exception Value: Invalid HTTP_HOST header: 'www.tvmao.com'. You may need to add 'www.tvmao.com' to ALLOWED_HOSTS.
Request information:
USER: [unable to retrieve the current user]
GET: No GET data
POST: No POST data
FILES: No FILES data
COOKIES: No cookie data
META:
HTTP_ACCEPT = '*/*'
HTTP_ACCEPT_LANGUAGE = 'zh-cn'
HTTP_CACHE_CONTROL = 'no-cache'
HTTP_CONNECTION = 'Upgrade'
HTTP_HOST = 'www.tvmao.com'
HTTP_REFERER = '/movie/bCZgaGBj'
HTTP_USER_AGENT = 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_2_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.3 Mobile/15E148 Safari/604.1'
HTTP_X_FORWARDED_FOR = '27.124.12.23'
HTTP_X_REAL_IP = '27.124.12.23'
PATH_INFO = '/movie/bCZgaGBj'
QUERY_STRING = ''
REMOTE_ADDR = '127.0.0.1'
REMOTE_HOST = '127.0.0.1'
REMOTE_PORT = 44058
REQUEST_METHOD = 'GET'
SCRIPT_NAME = ''
SERVER_NAME = '127.0.0.1'
SERVER_PORT = '8000'
wsgi.multiprocess = True
wsgi.multithread = True
Here's how I've tried to simulate the attack using raw http requests and netcat:
me@linuxmachine:~$ cat raw.http
GET /dashboard/ HTTP/1.1
Host: serverA.com
Host: test.com
Connection: close
me@linuxmachine:~$ cat raw.http | nc A.B.C.D 80
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 27 Jan 2023 15:05:13 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
If I send my correct serverA.com as the host header, I get a 301 (redirecting to https).
If I send an incorrect host header (e.g. test.com) I get an empty response (expected).
If I send two host headers (correct and incorrect), I get a 400 Bad Request.
If I send the correct host, but to port 443, I get a 400 "plain HTTP sent to HTTPS port"...
How do I simulate a request to get past nginx to my upstream serverA like the bots do? And how do I block it with nginx?
Thanks!
There is something magical about asking SO. The process of writing makes the answer appear :)
To my first question above, of simulating the spoof, I was able to just use curl in the following way:
me#linuxmachine:~$ curl -H "Host: A.B.C.D" https://example.com
I'm pretty sure I've tried this before, but I'm not sure why I didn't try this exact spell (perhaps I was sending a different header, like Http-Host: or something).
With this call, I was able to trigger the error as before, which made it easy to test the nginx configuration and answer the second question.
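Another way to reproduce what the bots do, including the TLS/SNI side, is to force curl to resolve the spoofed hostname to your own IP, so both the SNI and Host header carry the bogus name (a sketch; www.tvmao.com and A.B.C.D stand in for the spoofed host and the server's public IP):

curl -vk --resolve www.tvmao.com:443:A.B.C.D https://www.tvmao.com/movie/bCZgaGBj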
It was clear that the spoof was coming in on 443, which led me to this very informative post on StackExchange.
This also explained why we can't just listen on 443 and respond with a 444 without first having exchanged SSL certificates, due to the way SSL works.
The three options suggested (haproxy, a fake cert, and the if ($host ...) directive) might all work, but the simplest, I think, is the last one. Since this if () is not within the location context, I believe this to be OK.
My new serverA block looks like this:
server {
    server_name serverA.com www.serverA.com;
    client_max_body_size 10M;

    ## This fixes it
    if ( $http_host !~* ^(serverA\.com|www\.serverA\.com)$ ) {
        return 444;
    }
    ## and it's not inside the location context...

    location / {
        proxy_pass http://upstream;
        proxy_http_version 1.1;
        ...
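For reference, the "fake cert" option mentioned above would look roughly like this: a catch-all TLS server that completes the handshake with a throwaway self-signed certificate and then drops the connection (a sketch; the snakeoil paths are placeholders for any self-signed pair):

server {
    listen 443 ssl default_server;
    server_name _;
    # any self-signed certificate will do; clients probing with a bogus SNI/Host get this
    ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    return 444;
}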
I have configured Nginx as a reverse proxy server for my NiFi application running in the backend on port 9443.
Here goes my Nginx conf:
worker_processes 1;

events { worker_connections 1024; }

http {
    map_hash_bucket_size 128;
    sendfile on;
    large_client_header_buffers 4 64k;

    upstream nifi {
        server cloud-analytics-test2-nifi-a.insights.io:9443;
    }

    server {
        listen 443 ssl;
        #ssl on;
        server_name nifi-test-nginx.insights.io;

        ssl_certificate /etc/nginx/cert1.pem;
        ssl_certificate_key /etc/nginx/privkey1.pem;
        ssl_client_certificate /etc/nginx/nifi-client.pem; # this contains server cert and ca cert
        ssl_verify_client on;
        ssl_verify_depth 2;
        error_log /var/log/nginx/error.log debug;

        proxy_ssl_certificate /etc/nginx/cert1.pem;
        proxy_ssl_certificate_key /etc/nginx/privkey1.pem;
        proxy_ssl_trusted_certificate /etc/nginx/nifi-client.pem;

        location / {
            proxy_pass https://nifi;
            proxy_set_header X-ProxyScheme https;
            proxy_set_header X-ProxyHost nifi-test-nginx.insights.io;
            proxy_set_header X-ProxyPort 443;
            proxy_set_header X-ProxyContextPath /;
            proxy_set_header X-ProxiedEntitiesChain "<$ssl_client_s_dn>";
            proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
        }
    }
}
Whenever I try to access NiFi through the Nginx reverse proxy address/hostname, I get the error below.
client SSL certificate verify error: (2:unable to get issuer certificate) while reading client request headers,
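The "unable to get issuer certificate" error usually means the CA (including any intermediate) that issued the client certificate is not present in the file given to ssl_client_certificate. One way to check is to list what is in that bundle and verify the client certificate against it with openssl (a sketch; client-cert.pem is a placeholder for the certificate the client presents):

# list every certificate bundled in the file nginx uses for client verification
openssl crl2pkcs7 -nocrl -certfile /etc/nginx/nifi-client.pem | openssl pkcs7 -print_certs -noout
# check that the client certificate chains up to something in that bundle
openssl verify -CAfile /etc/nginx/nifi-client.pem client-cert.pem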
My TCP client is behind nginx. The idea is to have the backend TCP client authenticate the external server.
Finally, I have this configuration:
# secured TCP part
stream {
    log_format main '$remote_addr - - [$time_local] protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';

    server {
        listen 127.0.0.1:10515;
        proxy_pass REALIP:REALPORT;
        proxy_ssl on;

        # server side authentication (client verifying server's certificate)
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_certificate /f0/client.crt;
        proxy_ssl_certificate_key /f0/client.key;

        access_log /var/log/nginx.tcp1.access.log main;
        error_log /var/log/nginx.tcp1.error.log debug;

        # The trusted CA certificates in the file named by the proxy_ssl_trusted_certificate directive are used to verify the certificate on the upstream
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /f0/client_ca.crt;
        proxy_ssl_verify_depth 1;
        proxy_ssl_session_reuse on;
        proxy_ssl_name localhost;

        ssl_session_timeout 4h;
        ssl_handshake_timeout 30s;
        ssl_prefer_server_ciphers on;
        #proxy_ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384";
        proxy_ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA";
        #ssl_ecdh_curve prime256v1:secp384r1;
        #ssl_session_tickets off;

        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;
    }
}
Can you please review the above configuration and let me know if I am verifying the peer the right way?
If yes, my other question:
Is there a way to supply the peer's certificate (public key) itself, rather than its CA, and verify against that?
Please clarify.
Thanks,
The question looks quite generic, as it doesn't mention any specific aspects (apart from TLS).
Please visit the following to get a good start:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/
Quoting a simple example configuration from the above sources:
stream {
    server {
        listen 12345;
        # TCP traffic will be forwarded to the "stream_backend" upstream group
        proxy_pass stream_backend;
    }

    server {
        listen 12346;
        # TCP traffic will be forwarded to the specified server
        proxy_pass backend.example.com:12346;
    }
}
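To add upstream TLS with verification of the peer to that example (a sketch; the trusted-certificate path is a placeholder, and the file must contain the upstream's CA chain, or the upstream's own certificate if it is self-signed):

stream {
    server {
        listen 12346;
        proxy_pass backend.example.com:12346;
        proxy_ssl on;
        proxy_ssl_verify on;
        proxy_ssl_verify_depth 1;
        proxy_ssl_trusted_certificate /etc/nginx/upstream_ca.crt;
        # must match a name in the upstream's certificate
        proxy_ssl_name backend.example.com;
    }
}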
For example: I want to reverse proxy the domain https://tw.godaddy.com. Is this possible?
My config does not work.
location ~ / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass https://tw.godaddy.com;
    proxy_set_header Host "tw.godaddy.com";
    proxy_set_header Accept-Encoding "";
    proxy_set_header User-Agent $http_user_agent;
    #more_clear_headers "X-Frame-Options";
    sub_filter_once off;
}
Yes. It is possible.
Requirements:
Compiled with --with-stream
Compiled with --with-stream_ssl_module
You can check that with nginx -V
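For example (nginx -V prints to stderr, hence the redirect; the check only looks for the module in the configure arguments):

nginx -V 2>&1 | grep -o with-stream_ssl_module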
Configuration example:
stream {
    upstream backend {
        server backend1.example.com:12345;
        server backend2.example.com:12345;
        server backend3.example.com:12345;
    }

    server {
        listen 12345;
        proxy_pass backend;
        proxy_ssl on;

        proxy_ssl_certificate /etc/nginx/nginxb.crt;
        proxy_ssl_certificate_key /etc/nginx/nginxb.key;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_ciphers HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;

        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;
    }
}
Explanation:
Turn on SSL to the backend:
proxy_ssl on;
Specify the path to the SSL client certificate required by the upstream server and the certificate’s private key:
proxy_ssl_certificate /etc/nginx/nginxb.crt;
proxy_ssl_certificate_key /etc/nginx/nginxb.key;
These client key/certificate files are your own certificates used to start the SSL session to the backend. You can create a self-signed pair via:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/nginxb.key -out /etc/nginx/nginxb.crt
If the backend is self-signed too, turn off proxy_ssl_verify and remove the verify depth.
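So for a self-signed backend, the upstream-TLS part of the example above would reduce to roughly this (same placeholder paths as above):

proxy_ssl on;
proxy_ssl_certificate /etc/nginx/nginxb.crt;
proxy_ssl_certificate_key /etc/nginx/nginxb.key;
# self-signed backend: no chain to verify
proxy_ssl_verify off;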
We have 2 domains pointing to the same public AWS ELB and behind that ELB we have nginx, which will redirect requests to the correct server.
When we hit sub.domainA.com in the Browser (Chrome/Safari/etc), everything works, but when we use tools like openssl, we get a cert error:
openssl s_client -host sub.domainA.com -port 443 -prexit -showcerts
CONNECTED(00000003)
depth=0 /OU=Domain Control Validated/CN=*.domainB.com
verify error:num=20:unable to get local issuer certificate
verify return:1
For some reason, domainA is using domainB certs and I have no idea why.
I'm almost 100% sure the issue is with our nginx config (and more specifically, not having a default server block).
Here is our nginx config:
worker_processes auto;
error_log /var/log/nginx/error.log;
error_log /var/log/nginx/error.log warn;
error_log /var/log/nginx/error.log notice;
error_log /var/log/nginx/error.log info;
events {
    worker_connections 1024;
}

http {
    include /usr/local/openresty/nginx/conf/mime.types;
    default_type application/octet-stream;
    ...

    #
    # DomainB
    #
    server {
        ssl on;
        ssl_certificate /etc/nginx/domainB.crt;
        ssl_certificate_key /etc/nginx/domainB.key;
        listen 8080;
        server_name *.domainB.com;
        access_log /var/log/nginx/access.log logstash_json;
        error_page 497 301 =200 https://$host$request_uri;
        set $target_web "web.domainB_internal.com:80";

        location / {
            keepalive_timeout 180;
            resolver 10.10.0.2 valid=30s;
            proxy_set_header Host $host;
            proxy_pass http://$target_web;
            proxy_set_header X-Unique-ID $request_id;
        }
    }

    #
    # DomainA
    #
    server {
        ssl on;
        ssl_certificate /etc/nginx/domainA.crt;
        ssl_certificate_key /etc/nginx/domainA.key;
        listen 8080;
        server_name *.domainA.com;
        access_log /var/log/nginx/access.log logstash_json;
        error_page 497 301 =200 https://$host$request_uri;
        set $target_web "web.domainA_internal.com:80";

        location / {
            keepalive_timeout 180;
            resolver 10.10.0.2 valid=30s;
            proxy_set_header Host $host;
            proxy_pass http://$target_web;
            proxy_set_header X-Unique-ID $request_id;
        }
    }
}
It shouldn't even be falling into the domainB block at all!
Yet we see it when using "openssl s_client", but not in the browser.
Any ideas why we see domainB at all when using "openssl s_client -host sub.domainA.com"?
Very similar to Openssl shows a different server certificate while browser shows correctly
Very helpful website: https://tech.mendix.com/linux/2014/10/29/nginx-certs-sni/
You need to specify the servername option in your openssl command.
From the openssl s_client docs:
-servername name
Set the TLS SNI (Server Name Indication) extension in the ClientHello
message.
So try something like
openssl s_client -connect sub.domainA.com:443 -showcerts -servername sub.domainA.com
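To quickly confirm which certificate a given SNI name returns, the output can be piped through openssl x509 (a sketch; vary -servername to compare domainA and domainB):

openssl s_client -connect sub.domainA.com:443 -servername sub.domainA.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer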