Global variable on NGINX

I'm using NGINX to serve static files, reverse proxy to Django, and verify client certificates.
I don't want the certificate to be requested at the URL root, so I created another server block in nginx.conf that asks for the certificate on port 8443. This server is intended only to ask for the certificate and redirect the client back to port 443, where the reverse proxy to Django happens.
This is my nginx.conf:
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
sendfile on;
keepalive_timeout 65;
gzip on;
upstream app {
server django:8000;
}
# Redirect from HTTP to HTTPS
server {
listen *:80;
server_name localhost;
return 301 https://localhost$request_uri;
}
server {
listen *:443 ssl;
server_name localhost;
ssl_certificate /etc/nginx/ssl/certificate.pem;
ssl_certificate_key /etc/nginx/ssl/certificate.key;
ssl_password_file /etc/nginx/ssl/certificate.pass;
ssl_verify_client off;
ssl_client_certificate /etc/nginx/ssl/chain.cer;
ssl_verify_depth 3;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
ssl_prefer_server_ciphers on;
server_tokens off;
underscores_in_headers on;
location /static/ {
autoindex off;
alias /static_files/;
}
location / {
try_files $uri $uri/ @app_web;
}
location @app_web {
proxy_pass http://app;
proxy_pass_request_headers on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header CERTINFO $http_certinfo;
proxy_redirect off;
}
}
server {
listen *:8443 ssl;
server_name localhost;
ssl_certificate /etc/nginx/ssl/certificate.pem;
ssl_certificate_key /etc/nginx/ssl/certificate.key;
ssl_password_file /etc/nginx/ssl/certificate.pass;
ssl_verify_client on;
ssl_client_certificate /etc/nginx/ssl/certificate.cer;
ssl_verify_depth 3;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
ssl_prefer_server_ciphers on;
add_header CERTINFO $ssl_client_s_dn;
return 301 https://$host$request_uri;
}
}
When the client authenticates on port 8443 and is redirected back to port 443, I need to forward the certificate information, more specifically $ssl_client_s_dn. To do so, I'm using add_header on the port 8443 server (CERTINFO) and proxy_set_header on the port 443 server, capturing the value with $http_certinfo. But this solution is not working: the header is never forwarded from the port 8443 server to the port 443 server.
My question is: is there a way to do that? Can I set some kind of "global" variable in the http block, change its value on port 8443 and then use the updated value on port 443 to forward it to Django?
Thank you so much!
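For context, a possible explanation and workaround (a sketch, not a verified fix): add_header only attaches CERTINFO to the 301 response sent back to the browser, and the browser's next request to port 443 does not echo response headers back, so $http_certinfo stays empty. One way around that, under the assumption that a query argument is acceptable and that $ssl_client_s_dn is URL-encoded or free of reserved characters, is to carry the DN in the redirect itself (the certinfo argument name below is made up for illustration):
# port 8443 server: put the DN into the redirect target
return 301 https://$host$request_uri?certinfo=$ssl_client_s_dn;
# port 443 server, inside location @app_web: read it back from the query string
proxy_set_header CERTINFO $arg_certinfo;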

Related

nginx: how to set up server blocks to allow subdomain

I have openresty running on my AWS instance (Instance A) and the IP address of the server is already tied to the domain name myapp.john.com.
My app is running on another AWS instance (Instance B) within the same private network. It has a private IP address of 192.42.56.87 and the app is running on port :80.
I want to set up my openresty / nginx such that when visiting prod.myapp.john.com, nginx directs me to 192.42.56.87:80, and when visiting test.myapp.john.com, nginx directs me to another instance (Instance C) running the test version of my app, say on 192.xx.xx.xx:80.
Below are code in (Instance A):
Main config file /usr/local/openresty/nginx/conf/nginx.conf is defined as:
# Main NGNX Config File
#user www-data;
worker_processes auto;
pid logs/nginx.pid;
error_log logs/error.log info;
error_log logs/error.log notice;
error_log logs/error.log debug;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
keepalive_requests 100000;
resolver 8.8.8.8 valid=30s ipv6=off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
gzip on;
# Include all the sites for the domain
include /usr/local/openresty/nginx/sites/*;
}
/usr/local/openresty/nginx/sites/prod.myapp.john.com is defined as:
server {
listen 80;
listen [::]:80;
server_name prod.myapp.john.com; # this does not work, but "myapp.john.com" works
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name prod.myapp.john.com; # this does not work, but "myapp.john.com" works
ssl_certificate /etc/letsencrypt/live/myapp.john.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.john.com/privkey.pem;
location / {
proxy_pass http://192.42.56.87:80/;
expires 0;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
}
}
Now, in the Chrome browser, when I visit prod.myapp.john.com there is no response at all, since the request never gets to my Instance A;
However, if I change
server_name prod.myapp.john.com
to
server_name myapp.john.com
it works and the web page gets rendered.
Why?
How can I include more site files in /usr/local/openresty/nginx/sites/ and set up the server blocks correctly to serve more subdomains on my site?
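For reference, a sketch of what an additional site file might look like, assuming each subdomain already has a DNS record (or a wildcard record) pointing at Instance A; if prod.myapp.john.com does not resolve to Instance A, no server_name in any file will ever see the request, which would explain the symptom above. The test backend address below is hypothetical:
# /usr/local/openresty/nginx/sites/test.myapp.john.com (hypothetical example)
server {
    listen 80;
    listen [::]:80;
    server_name test.myapp.john.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name test.myapp.john.com;
    # assumes the certificate also covers this subdomain (wildcard or extra SAN)
    ssl_certificate /etc/letsencrypt/live/myapp.john.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.john.com/privkey.pem;
    location / {
        proxy_pass http://192.0.2.10:80/;   # hypothetical private IP of Instance C
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}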

Nginx proxying TCP Socket Connections is altering original IP

I am using the following configuration and running it as root:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
stream {
upstream web {
server 127.0.0.1:8445;
}
upstream tcpsocket {
server 127.0.0.1:8444;
}
map $ssl_preread_alpn_protocols $upstream {
"" tcpsocket;
default web;
}
server {
listen 443;
proxy_pass $upstream;
ssl_preread on;
}
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
server {
listen 8445 ssl;
server_name 127.0.0.1;
ssl_certificate_key /etc/nginx/cert.key;
ssl_certificate /etc/nginx/cert.crt;
ssl_protocols TLSv1.2;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Fix the "It appears that your reverse proxy set up is broken" error.
proxy_pass http://localhost:8443;
proxy_read_timeout 90;
proxy_redirect http://localhost:8443 https://127.0.0.1;
}
}
}
With a requirement to have only one port open (443), I am trying to:
dispatch HTTP requests to 127.0.0.1:8443, eventually, after SSL has been handled;
send all "other" requests (I only expect TCP socket connections) to a TCP socket server running at 127.0.0.1:8444.
This configuration is working perfectly, except that the caller IP changes to 127.0.0.1 for TCP connections.
I don't care about the HTTP caller IP.
I tried the following solutions:
server {
listen 443;
proxy_pass $upstream;
ssl_preread on;
proxy_bind $remote_addr:$remote_port transparent;
}
This causes upstream timed out (110: Connection timed out) while proxying connection
server {
listen 443;
proxy_pass $upstream;
ssl_preread on;
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Real-IP;
real_ip_recursive on;
}
real_ip settings do not work inside the stream block
server {
listen 443;
proxy_pass $upstream;
ssl_preread on;
proxy_protocol on;
}
This causes nginx to fail with error broken header while reading PROXY protocol
Please help!
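For reference, a sketch of how PROXY protocol is normally wired end to end, assuming the backends can be changed: when the stream proxy adds the header with proxy_protocol on, every upstream behind it has to be told to expect that header, otherwise the connection fails.
stream {
    # upstream and map blocks as in the original configuration
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
        proxy_protocol on;               # prepend the original client address
    }
}
http {
    server {
        listen 8445 ssl proxy_protocol;  # the HTTP backend now expects the PROXY header
        set_real_ip_from 127.0.0.1;      # trust the local stream proxy
        real_ip_header proxy_protocol;   # recover the real client IP for logs and headers
        # remaining directives as in the original 8445 server block
    }
}
The raw TCP server on 127.0.0.1:8444 would likewise have to parse the PROXY protocol header itself; otherwise nginx cannot preserve the source address of a plain TCP stream, short of the transparent proxy_bind setup, which also needs matching routing rules on the host and is likely why it timed out above.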

SignalR SSL trust issue with reverse proxy (Nginx) for Android

I am facing an issue with SignalR behind a reverse proxy (Nginx) that uses certificates (server, client, and CA); these are self-signed certificates generated using OpenSSL.
The Nginx configuration is as below.
http
{
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request"
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
error_log "C:/Error/error1.log" debug;
listen 8080;
server_name localhost;
location /MyService
{
root html;
index index.html index.htm;
proxy_pass http://XX.XX.XX.XX/MyService;
}
location /
{
root html;
index index.html index.htm;
proxy_pass http://XX.XX.XX.XX/MyService;
}
location /MyService/signalr/
{
root html;
index index.html index.htm;
proxy_pass http://XX.XX.XX.XX/MyService/signalr/;
}
}
server {
error_log "C:/Error/error1.log" debug;
listen 443 ssl;
server_name localhost;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate "C:\Certificates\server.crt";
ssl_certificate_key "C:\Certificates\server.key";
ssl_client_certificate "C:\Certificates\ca.crt";
ssl_verify_client on;
location /
{
root html;
index index.html index.htm;
proxy_pass http://XX.XX.XX.XX/MyService;
}
location /MyService
{
root html;
index index.html index.htm;
proxy_pass http://XX.XX.XX.XX/MyService;
}
location /MyService/signalr
{
root html;
index index.html index.htm;
proxy_pass http://XX.XX.XX.XX/MyService/signalr;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
}
I have an Android app which forces the user to select the client certificate (a P12 file with the CA and client certificates packaged together) for authentication through Nginx, which routes the request to server XX.XX.XX.XX. I have logic which successfully establishes the connection to the XX.XX.XX.XX server through Nginx, and authentication to server XX.XX.XX.XX through Nginx is successful.
I am facing issues with SignalR in this scenario, where I make a hub connection to the server XX.XX.XX.XX such as $"https://{hostname}:{port}/MyService/signalr/". When I try to start the hub connection with _hubConnection.Start().Wait(); I get the error "One or more errors occurred. (The SSL connection could not be established, see inner exception.)", because of which my hub is not receiving any notification from the server.
I have referred to the following links and tried the suggestions there. Unfortunately I am not getting any breakthrough. Request your help on this.
https://forums.asp.net/t/2029050.aspx?SignalR+Unable+to+Authentication+with+Certificate
https://learn.microsoft.com/en-us/aspnet/signalr/overview/security/hub-authorization
SignalR - Could not establish trust relationship for the SSL/TLS secure channel
https://forums.asp.net/t/2095053.aspx?SignalR+and+reverse+proxy
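Not a definitive fix, but two nginx-side details that may be worth checking in this setup, sketched under the assumption that the certificates are issued by the private ca.crt: first, ssl_verify_client on applies to every connection to the 443 server, so the SignalR hub connection must present the same P12 client certificate as the rest of the app; second, if there is any intermediate CA between server.crt and the root the app trusts, the server has to present the full chain:
server {
    listen 443 ssl;
    server_name localhost;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # hypothetical file: server.crt plus any intermediate CA certificates, concatenated in order
    ssl_certificate "C:/Certificates/fullchain.crt";
    ssl_certificate_key "C:/Certificates/server.key";
    ssl_client_certificate "C:/Certificates/ca.crt";
    ssl_verify_client on;
    # location blocks as in the original configuration
}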

Nginx is not able to handle a large number of requests

I have a scenario where nginx is in front of an Artifactory server.
Recently, while trying to pull a big number of Docker images in a for loop, all at the same time (the first test was with 200 images, the second with 120 images), access to Artifactory gets blocked: nginx is busy processing all the requests and users are not able to reach it.
My nginx server is running with 4 CPU cores and 8192 MB of RAM.
I have tried to improve the handling of files in the server by adding the below:
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
This made it a bit better (though of course pulls of images of 1 GB+ take much more time, due to the chunk size), but access to the UI would still cause a lot of timeouts.
Is there something else that I can do to improve nginx performance whenever a bigger load is pushed through it?
I think that my last option is to increase the size of the machine (more CPUs) as well as the number of worker processes on nginx (8 to 16).
The full nginx.conf file follows below:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;
events {
worker_connections 19000;
}
http {
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
include /etc/nginx/mime.types;
default_type application/octet-stream;
gzip on;
gzip_disable "msie6";
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
set_real_ip_from 138.190.190.168;
real_ip_header X-Forwarded-For;
log_format custome '$remote_addr - $realip_remote_addr - $remote_user [$time_local] $request_time'
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
server {
listen 80 default;
listen [::]:80 default;
server_name _;
return 301 https://$server_name$request_uri;
}
###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################
## add ssl entries when https has been set in config
ssl_certificate /etc/ssl/certs/{{ hostname }}.cer;
ssl_certificate_key /etc/ssl/private/{{ hostname }}.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
## server configuration
server {
listen 443 ssl;
server_name ~(?<repo>.+)\.{{ hostname }} {{ hostname }} _;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
access_log /var/log/nginx/{{ hostname }}-access.log custome;
error_log /var/log/nginx/{{ hostname }}-error.log warn;
rewrite ^/$ /webapp/ redirect;
rewrite ^//?(/webapp)?$ /webapp/ redirect;
rewrite ^/(v1|v2)/(.*) /api/docker/$repo/$1/$2;
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 900;
proxy_max_temp_file_size 10240m;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://{{ appserver }}:8081/artifactory/;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
Thanks for the tips.
Cheers,
Ricardo
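For reference, a hedged sketch of the tuning knobs that are usually tried before resizing the machine; the values below are starting points, not numbers validated against this Artifactory instance:
worker_processes auto;              # one worker per core instead of a fixed 8 on a 4-core box
events {
    worker_connections 4096;
    multi_accept on;
}
http {
    # avoid spooling multi-gigabyte docker blobs through proxy buffers and temp files
    proxy_buffering off;
    proxy_request_buffering off;
    # reuse connections to Artifactory instead of opening a new one per request
    upstream artifactory {
        server {{ appserver }}:8081;
        keepalive 32;
    }
    # remaining http-level and server-level directives as in the original configuration
}
For the upstream keepalive to take effect, the location block would also need proxy_http_version 1.1; and proxy_set_header Connection ""; and proxy_pass would point at http://artifactory/artifactory/ instead of the host directly.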

https not redirecting to mongrel upstream

Normal HTTP is working fine for me with nginx and mongrel; however, when I attempt to use HTTPS I am directed to the "Welcome to nginx" page.
http {
# passenger_root /opt/passenger-2.2.11;
# passenger_ruby /usr/bin/ruby1.8;
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
upstream mongrel {
server 00.000.000.000:8000;
server 00.000.000.000:8001;
}
server {
listen 443;
server_name domain.com;
ssl on;
ssl_certificate /etc/ssl/localcerts/domain_combined.crt;
ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key;
# ssl_session_timeout 5m;
# ssl_protocols SSLv2 SSLv3 TLSv1;
# ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
# ssl_prefer_server_ciphers on;
location / {
root /current/public/;
index index.html index.htm;
proxy_set_header X_FORWARDED_PROTO https;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://mongrel;
}
}
}
Do you have an explicit server entry for port 80? It could be that an nginx default server block is intercepting the regular HTTP traffic.
Add another server block just to be sure:
server {
listen 80;
server_name domain.com www.domain.com;
rewrite ^(.*) https://domain.com$1 permanent;
}
This will redirect all traffic for your app to HTTPS. Even if that isn't what you ultimately want to happen, at least you'll know whether the missing non-HTTPS block was the problem, and you can then replace it with the directives you need.
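If HTTPS still lands on the welcome page after that, one more hedged thing to check: make sure this is the server block that actually answers on port 443, for example by marking it as the default for that port and removing any stock site that also listens there. A sketch, assuming the rest of the block stays as above:
server {
    listen 443 default_server;   # ensure this block, not a distro default site, handles HTTPS
    ssl on;
    server_name domain.com www.domain.com;
    # ssl_certificate, ssl_certificate_key and the location block as in the original config
}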
