My TCP client sits behind nginx, and the idea is to have the backend TCP client authenticate the external server.
I ended up with this configuration:
#secured TCP part
stream {
    log_format main '$remote_addr - - [$time_local] protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';

    server {
        listen 127.0.0.1:10515;
        proxy_pass REALIP:REALPORT;
        proxy_ssl on;

        # client certificate and key presented to the upstream server
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_certificate /f0/client.crt;
        proxy_ssl_certificate_key /f0/client.key;

        access_log /var/log/nginx.tcp1.access.log main;
        error_log /var/log/nginx.tcp1.error.log debug;

        # The trusted CA certificates in the file named by the
        # proxy_ssl_trusted_certificate directive are used to verify the
        # certificate on the upstream (this is where the client verifies
        # the server's certificate)
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /f0/client_ca.crt;
        proxy_ssl_verify_depth 1;
        proxy_ssl_session_reuse on;
        proxy_ssl_name localhost;

        ssl_session_timeout 4h;
        ssl_handshake_timeout 30s;
        ssl_prefer_server_ciphers on;

        #proxy_ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384";
        proxy_ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA";
        #ssl_ecdh_curve prime256v1:secp384r1;
        #ssl_session_tickets off;

        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;
    }
}
Can you please review the above configuration and let me know whether I am verifying the peer the right way?
If yes, my other question:
Is there a way to supply the peer certificate (public key) itself, rather than its CA, and verify against that?
Please clarify.
Thanks.
The question looks quite generic, as it doesn't mention any specific aspects (apart from TLS).
Please visit the following to get a good start:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/
Quoting a simple example configuration from the above sources:
stream {
    server {
        listen 12345;
        # TCP traffic will be forwarded to the "stream_backend" upstream group
        proxy_pass stream_backend;
    }

    server {
        listen 12346;
        # TCP traffic will be forwarded to the specified server
        proxy_pass backend.example.com:12346;
    }
}
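On the second part of the question (verifying against the peer certificate itself rather than its CA): nginx has no dedicated pinning directive for upstream TLS, but if the upstream certificate is self-signed it is its own issuer, so you can point proxy_ssl_trusted_certificate at that certificate directly. A minimal sketch, assuming a self-signed upstream certificate at the hypothetical path /f0/upstream_server.crt:

stream {
    server {
        listen 127.0.0.1:10515;
        proxy_pass REALIP:REALPORT;
        proxy_ssl on;
        proxy_ssl_verify on;
        # a self-signed certificate is its own trust anchor, so listing the
        # upstream's own certificate here effectively pins it
        proxy_ssl_trusted_certificate /f0/upstream_server.crt;
        proxy_ssl_verify_depth 1;
        # must match a CN/SAN in the pinned certificate
        proxy_ssl_name upstream.example.com;
    }
}

Note that proxy_ssl_verify still checks the certificate name against proxy_ssl_name, so the pinned certificate has to carry the name you connect with.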
Related
I got a .crt file from a partner and I want nginx to make the SSL connection, so I followed this note: https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/ . I don't have a client .pem file or a client key file. How can I generate these files from the .crt to fill in these directives:
location /upstream {
    proxy_pass https://backend.example.com;
    proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;
}
My actual location block is:
location /api {
    access_log /var/log/nginx/api.log upstream_logging;
    proxy_ssl_trusted_certificate /etc/nginx/partner.crt;
    # proxy_ssl_certificate_key /etc/nginx/client.key;
    # proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_verify off;
    proxy_ssl_verify_depth 2;
    proxy_ssl_session_reuse on;
    proxy_ssl_server_name on;
    #proxy_ssl_protocols TLSv1 ;
    proxy_pass https://api$uri$is_args$args;
}
With this setting I get this error:
SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream
How do I get client.key? Is it generated from the .crt file?
You do not need a client key.
The error SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol probably indicates that the protocol was not chosen correctly in proxy_ssl_protocols. In particular, this error typically appears when the upstream answers with something other than TLS (for example, plain HTTP on that port).
Official documentation: nginx.org, proxy_ssl_protocols
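As a minimal sketch of that suggestion, explicitly allowing modern TLS versions for the upstream connection (which versions to list depends on what the partner's server actually supports):

# inside the location block that proxies to the partner
proxy_ssl_protocols TLSv1.2 TLSv1.3;
proxy_ssl_server_name on;  # send SNI, which many HTTPS endpoints require

And if the partner ever does require mutual TLS, the client key pair would be generated on your side rather than derived from their .crt. For example, a hypothetical self-signed pair:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /etc/nginx/client.key -out /etc/nginx/client.pem \
    -subj "/CN=my-client"

(In practice the partner would usually need to sign or trust that certificate for mutual TLS to work.)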
I have a somewhat messy setup (no choice) where a local computer is made available to the internet through port forwarding. It is only reachable through [public IP]:8000. I cannot get a Let's Encrypt certificate for an IP address, but the part of the app that will be accessed from the internet does not require encryption. So instead, I'm planning on making the app available from the internet at http://[public IP]:8000/, and from the local network at https://[local DNS name]/ (port 80). The certificate used in the latter is issued by our network's root CA. Clients within the network trust this CA.
Furthermore, some small changes are made to the layout of the page when accessed from the internet. These changes are made by setting an embedded query param.
In summary, I need:
+--------------------------+--------------------------+----------+--------------------------------------+
| Accessed using | Redirect to (ideally) | URL args | Current state |
+--------------------------+--------------------------+----------+--------------------------------------+
| http://a.b.c.d:8000 | no redirect | embedded | Arg not appended, redirects to HTTPS |
| http://localhost:8000 | no redirect | embedded | Arg not appended, redirects to HTTPS |
| http://[local DNS name] | https://[local DNS name] | no args | Working as expected |
| https://[local DNS name] | no redirect | no args | Working as expected |
+--------------------------+--------------------------+----------+--------------------------------------+
For the two top rows, I don't want the redirection to HTTPS, and I need ?embedded to be appended to the URL.
Here's my config:
upstream channels-backend {
    server api:5000;
}

# Connections from the internet (no HTTPS)
server {
    listen 8000;
    listen [::]:8000;
    server_name [PUBLIC IP ADDRESS] localhost;
    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }
    location /admin/ {
        # Do not allow access to /admin/ from the internet.
        return 404;
    }
    location /static/rest_framework/ {
        alias /home/docker/backend/static/rest_framework/;
    }
    location /static/admin/ {
        alias /home/docker/backend/static/admin/;
    }
    location /files/media/ {
        alias /home/docker/backend/media/;
    }
    location /api/ {
        proxy_pass http://channels-backend/;
    }
    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }
    location / {
        root /var/www/frontend/;
        # I want to add "?embedded" to the URL if accessed through http://[public IP]:8000.
        # I do not want to redirect to HTTPS.
        try_files $uri $uri/ /$uri.html?embedded =404;
    }
}

# Upgrade requests from local network to HTTPS
server {
    listen 80;
    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;
    server_name [local DNS name] [local IP] localhost;
    # This works; it redirects to HTTPS.
    return 301 https://$http_host$request_uri;
}

# Server for connections from the local network (uses HTTPS)
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name [local DNS name] [local IP] localhost;
    ssl_password_file /etc/nginx/certificates/global.pass;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.2 TLSv1.1;
    ssl_certificate /etc/nginx/certificates/certificate.crt;
    ssl_certificate_key /etc/nginx/certificates/privatekey.key;
    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }
    location /admin/ {
        proxy_pass http://channels-backend/admin/;
    }
    location /static/rest_framework/ {
        alias /home/docker/backend/static/rest_framework/;
    }
    location /static/admin/ {
        alias /home/docker/backend/static/admin/;
    }
    location /files/media/ {
        alias /home/docker/backend/media/;
    }
    location /api/ {
        # Proxy to backend
        proxy_read_timeout 30;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_redirect off;
        proxy_pass http://channels-backend/;
    }
    # ignore cache frontend
    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }
    location / {
        root /var/www/frontend/;
        # Do not add "?embedded" argument.
        try_files $uri $uri/ /$uri.html =404;
    }
}
The server serves both the frontend and an API developed using React and Django RF, in case it matters. It's deployed using Docker.
Any pointers would be greatly appreciated.
Edit: I commented out everything except the first server (port 8000), and requests are still being redirected to https://localhost:8000 from http://localhost:8000. I don't understand why. I'm using an incognito tab to rule out cache as the problem.
Edit 2: I noticed that Firefox sets an Upgrade-Insecure-Requests header with the initial request to http://localhost:8000. How can I ignore this header and not upgrade insecure requests? This request was made by Firefox, and not the frontend application.
Edit 3: Please take a look at the below configuration, which I'm now using to try to figure out the issue. How can this possibly result in redirection from HTTP to HTTPS? There's now only one server block, and there's nothing here that could be interpreted as a wish to redirect to https://localhost:8000 from http://localhost:8000. Where does the redirect come from? Notice that I replaced some parts with redirects to Google, Yahoo and Facebook. I'm not redirected to any of these. I'm immediately upgraded to HTTPS, which should not be supported at all with this configuration. It's worth mentioning that the redirect ends in SSL_ERROR_RX_RECORD_TOO_LONG. The certificate is accepted when accessing https://localhost/ (port 80) using the original configuration.
upstream channels-backend {
    server api:5000;
}

# Server for connections from the internet (does not use HTTPS)
server {
    listen 8000;
    listen [::]:8000 default_server;
    server_name localhost [public IP];
    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;
    ssl off;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }
    location /admin/ {
        # Do not allow access to /admin/ from the internet.
        return 404;
    }
    location /static/rest_framework/ {
        alias /home/docker/backend/static/rest_framework/;
    }
    location /static/admin/ {
        alias /home/docker/backend/static/admin/;
    }
    location /files/media/ {
        alias /home/docker/backend/media/;
    }
    location /api/ {
        proxy_pass http://channels-backend/;
    }
    location / {
        if ($args != "embedded") {
            return 301 https://google.com;
            # return 301 http://$http_host$request_uri?embedded;
        }
        return 301 https://yahoo.com;
        # root /var/www/frontend/;
        # try_files $uri $uri/ /$uri.html =404;
    }
}
Boy, do I feel stupid.
In my docker-compose.yml file, I had accidentally mapped port 8000 to 80:
nginx-server:
  image: nginx-server
  build:
    context: ./
    dockerfile: .docker/dockerfiles/NginxDockerfile
  restart: on-failure
  ports:
    - "0.0.0.0:80:80"
    - "0.0.0.0:443:443"
    - "0.0.0.0:8000:80" # Oops
So any request on port 8000 was received by nginx as a request on port 80. Thus, even a simple config like...
server {
    listen 8000;
    return 301 https://google.com;
}
... would still result in an apparent upgrade to HTTPS on port 80 (likely some combination of cached redirects and default behavior). I was thoroughly confused, but fixing my compose instructions fixed the problem:
nginx-server:
  image: nginx-server
  build:
    context: ./
    dockerfile: .docker/dockerfiles/NginxDockerfile
  restart: on-failure
  ports:
    - "0.0.0.0:80:80"
    - "0.0.0.0:443:443"
    - "0.0.0.0:8000:8000" # Fixed
I want to add a certificate and key to the nginx server that my application is served from on Heroku. Below is what I currently have in my nginx config file. Does proxying the SSL server work for this instead, and does it keep the server secure? If not, how do I get the file names of the .pem and .key files that I uploaded to Heroku for my specific application?
nginx.conf.erb
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections <%= ENV['NGINX_WORKER_CONNECTIONS'] || 1024 %>;
}

http {
    server_tokens off;
    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log <%= ENV['NGINX_ACCESS_LOG_PATH'] || 'logs/nginx/access.log' %> l2met;
    error_log <%= ENV['NGINX_ERROR_LOG_PATH'] || 'logs/nginx/error.log' %>;
    include mime.types;
    default_type text/html;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    # Must read the body in 65 seconds.
    keepalive_timeout 65;
    # handle SNI
    proxy_ssl_server_name on;

    upstream app_server {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }

    server {
        listen <%= ENV["PORT"] %>;
        server_name _;
        # Define the specified charset in the "Content-Type" response header field
        charset utf-8;

        location / {
            proxy_ssl_name <%= ENV["HEROKU_DOMAIN"] %>;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
            client_max_body_size 5M;
        }
        location /static {
            alias /app/flask_app/static;
        }
    }
}
If you create an SSL certificate through CloudFlare, you can't access it through the Heroku CLI, but you can download it from CloudFlare.
Please check whether you have routed your domain's web traffic through CloudFlare: Configure-Cloudflare-and-Heroku-over-HTTPS.
Download the SSL cert via CloudFlare.
Set up the SSL cert for nginx: Setup SSL Cert.
Hope it helps.
EDIT
Put the SSL cert's .key and .pem files into the same folder as nginx.conf.erb, i.e. domain_name.key and domain_name.pem.
Deploy to Heroku.
Use config like this:
ssl_certificate domain_name.pem;
ssl_certificate_key domain_name.key;
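In context, those directives would sit in the server block, roughly like this sketch (the listen line and location are assumptions based on the config above, and the file names are placeholders):

server {
    listen <%= ENV["PORT"] %> ssl;
    server_name _;
    # certificate files deployed next to nginx.conf.erb
    ssl_certificate domain_name.pem;
    ssl_certificate_key domain_name.key;
    location / {
        proxy_pass http://app_server;
    }
}

Be aware that the Heroku router normally terminates TLS before traffic reaches the dyno, so whether nginx on the dyno should handle certificates at all depends on your setup; treat this purely as showing where the directives go.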
Disclaimer: this is technically related to a school project, but I've talked to my professor and he is also confused by this.
I have a nginx load balancer reverse proxying to several uwsgi + flask apps. The apps are meant to handle very high throughput/load. My response times from uwsgi are pretty good, and the nginx server has low CPU usage and load average, but the overall request time is extremely high.
I've looked into this issue and all the threads I've found say that this is always caused by the client having a slow connection. However, the requests are being made by a script on the same network, and this issue isn't affecting anyone else's setup, so it seems to me that it's a problem with my nginx config. This has me totally stumped though because it seems almost unheard of for nginx to be the bottleneck like this.
To give an idea of the magnitude of the problem, there are three primary request types: add image, search, and add tweet (it's a twitter clone).
For add image, the overall request time is ~20x longer than the upstream response time on average. For search, it's a factor of 3, and add tweet 1.5. My theory for the difference here is that the amount of data being sent back and forth is much larger for add image than either search or add tweet, and larger for search than add tweet.
My nginx.conf is:
user www-data;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 30000;

events {
    worker_connections 30000;
}

http {
    # Settings.
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_buffer_size 200K;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # Logging (note: one statement; a stray semicolon before CONNECT_TIME
    # would break the config)
    log_format req_time '$remote_addr - $remote_user [$time_local] '
                        'REQUEST: $request '
                        'STATUS: $status '
                        'BACK_END: $upstream_addr '
                        'UPSTREAM_TIME: $upstream_response_time s '
                        'REQ_TIME: $request_time s '
                        'CONNECT_TIME: $upstream_connect_time s';
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log req_time;

    # GZIP business
    gzip on;
    gzip_disable "msie6";

    # Routing.
    upstream media {
        server 192.168.1.91:5000;
    }
    upstream search {
        least_conn;
        server 192.168.1.84:5000;
        server 192.168.1.134:5000;
    }
    upstream uwsgi_app {
        least_conn;
        server 192.168.1.85:5000;
        server 192.168.1.89:5000;
        server 192.168.1.82:5000;
        server 192.168.1.125:5000;
        server 192.168.1.86:5000;
        server 192.168.1.122:5000;
        server 192.168.1.128:5000;
        server 192.168.1.131:5000;
        server 192.168.1.137:5000;
    }

    server {
        listen 80;
        server_name localhost;

        location /addmedia {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass media;
        }
        location /media {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass media;
        }
        location /search {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass search;
        }
        location /time-search {
            rewrite /time-search(.*) /times break;
            include uwsgi_params;
            uwsgi_pass search;
        }
        location /item {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            if ($request_method = DELETE) {
                uwsgi_pass media;
            }
            if ($request_method = GET) {
                uwsgi_pass uwsgi_app;
            }
            if ($request_method = POST) {
                uwsgi_pass uwsgi_app;
            }
        }
        location / {
            include uwsgi_params;
            uwsgi_read_timeout 5s;
            proxy_read_timeout 5s;
            uwsgi_pass uwsgi_app;
        }
    }
}
And my uwsgi ini is:
[uwsgi]
chdir = /home/ubuntu/app/
module = app
callable = app
master = true
processes = 25
socket = 0.0.0.0:5000
socket-timeout = 5
die-on-term = true
home = /home/ubuntu/app/venv/
uid = 1000
buffer-size=65535
single-interpreter = true
Any insights as to the cause of this problem would be greatly appreciated.
So, I think I figured this out. From reading the nginx docs (https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/) it seems that there are three metrics to pay attention to: upstream_response_time, request_time, and upstream_connect_time. I was focused on the difference between upstream_response_time and request_time.
However, upstream_response_time is the time between the upstream accepting the request and returning a response. It doesn't include upstream_connect_time, the time it takes to establish a connection to the upstream server. And in the context of uwsgi this is very important, because if there isn't a worker available to accept a request, the request gets put on a backlog. I think the time a request waits on the backlog might count as upstream_connect_time rather than upstream_response_time in nginx, because uwsgi hasn't read any of the bytes yet.
Unfortunately, I can't be 100% certain, because I never got a "slow" run where I was logging upstream_connect_time. But the only things I changed that improved my score were just "make the uwsgi faster" changes (devote more cores to searching, increase replication factor in the DB to make searches faster)... So yeah, turns out the answer was just to increase throughput for the apps.
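One related knob, offered as an assumption rather than something verified on this setup: uwsgi's listen option controls the socket backlog size, and its default is only 100, so bursts beyond that can stall or fail at connect time. A sketch:

[uwsgi]
; enlarge the listen queue so bursts wait in the backlog instead of being
; refused; the kernel's net.core.somaxconn must be at least this large
listen = 1024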
We have 2 domains pointing to the same public AWS ELB and behind that ELB we have nginx, which will redirect requests to the correct server.
When we hit sub.domainA.com in the Browser (Chrome/Safari/etc), everything works, but when we use tools like openssl, we get a cert error:
openssl s_client -host sub.domainA.com -port 443 -prexit -showcerts
CONNECTED(00000003)
depth=0 /OU=Domain Control Validated/CN=*.domainB.com
verify error:num=20:unable to get local issuer certificate
verify return:1
For some reason, domainA is using domainB certs and I have no idea why.
I'm almost 100% sure the issue is with our nginx config (and more specifically, not having a default server block)
Here is our nginx config:
worker_processes auto;
error_log /var/log/nginx/error.log;
error_log /var/log/nginx/error.log warn;
error_log /var/log/nginx/error.log notice;
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1024;
}

http {
    include /usr/local/openresty/nginx/conf/mime.types;
    default_type application/octet-stream;
    ...

    #
    # DomainB
    #
    server {
        ssl on;
        ssl_certificate /etc/nginx/domainB.crt;
        ssl_certificate_key /etc/nginx/domainB.key;
        listen 8080;
        server_name *.domainB.com;
        access_log /var/log/nginx/access.log logstash_json;
        error_page 497 301 =200 https://$host$request_uri;
        set $target_web "web.domainB_internal.com:80";

        location / {
            keepalive_timeout 180;
            resolver 10.10.0.2 valid=30s;
            proxy_set_header Host $host;
            proxy_pass http://$target_web;
            proxy_set_header X-Unique-ID $request_id;
        }
    }

    #
    # DomainA
    #
    server {
        ssl on;
        ssl_certificate /etc/nginx/domainA.crt;
        ssl_certificate_key /etc/nginx/domainA.key;
        listen 8080;
        server_name *.domainA.com;
        access_log /var/log/nginx/access.log logstash_json;
        error_page 497 301 =200 https://$host$request_uri;
        set $target_web "web.domainA_internal.com:80";

        location / {
            keepalive_timeout 180;
            resolver 10.10.0.2 valid=30s;
            proxy_set_header Host $host;
            proxy_pass http://$target_web;
            proxy_set_header X-Unique-ID $request_id;
        }
    }
}
It shouldn't be falling into the domainB block at all!
Yet we see it when using "openssl s_client", but not in the browser.
Any ideas why we see domainB at all when using "openssl s_client -host sub.domainA.com"?
Very similar to Openssl shows a different server certificate while browser shows correctly
Very helpful website: https://tech.mendix.com/linux/2014/10/29/nginx-certs-sni/
You need to specify the servername option in your openssl command.
From the openssl s_client docs:
-servername name
    Set the TLS SNI (Server Name Indication) extension in the ClientHello message.
So try something like
openssl s_client -connect sub.domainA.com:443 -showcerts -servername sub.domainA.com
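Background: when a client does not send SNI, nginx picks the default server for that listen port, and without an explicit default_server that is simply the first server block defined, which here is domainB. If you want non-SNI clients to get something predictable instead, a sketch along these lines (certificate paths are placeholders):

# catch-all for clients that do not send SNI
server {
    listen 8080 ssl default_server;
    server_name _;
    ssl_certificate /etc/nginx/default.crt;
    ssl_certificate_key /etc/nginx/default.key;
    return 444;  # close the connection without sending a response
}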