HTTP forwarding Plaintext warning (nginx)

So I was updating my nginx configuration to meet some security checks and get an A+ grade from Qualys SSL Labs.
I do get the A+, but there is one warning I don't understand: the HTTP forwarding / plaintext warning mentioned in the title.
I do have an HTTP-to-HTTPS redirect, though, and it seems to be working fine. Does anyone know what could be the cause of this warning?
I found this question: HTTP forwarding PLAINTEXT warning, but it talks about Apache.
I also found this: https://github.com/ssllabs/ssllabs-scan/issues/154, where it was mentioned that it could be just a bug, but it's not clear now (that issue is old).
My nginx configuration:
http {
upstream odoo-upstream {
server odoo:8069 weight=1 fail_timeout=0;
}
upstream odoo-im-upstream {
server odoo:8072 weight=1 fail_timeout=0;
}
# Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Enable SSL session caching for improved performance
# http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
ssl_session_cache shared:ssl_session_cache:5m;
ssl_session_timeout 24h; # time which sessions can be re-used.
# Because the proper rotation of session ticket encryption key is
# not yet implemented in Nginx, we should turn this off for now.
ssl_session_tickets off;
# Default size is 16k, reducing it can slightly improve performance.
ssl_buffer_size 8k;
# Gzip Settings
gzip on;
# http redirects to https
server {
listen 80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
charset utf-8;
server {
# server port and name
listen 443 ssl http2;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options sameorigin;
add_header X-Content-Type-Options nosniff;
add_header X-Xss-Protection "1; mode=block";
# Specifies the maximum accepted body size of a client request,
# as indicated by the request header Content-Length.
client_max_body_size 200m;
# add ssl specific settings
keepalive_timeout 60;
ssl_certificate /etc/ssl/nginx/domain.bundle.crt;
ssl_certificate_key /etc/ssl/nginx/domain.key;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.4.4 8.8.8.8;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
# increase proxy buffer to handle some Odoo web requests
proxy_buffers 16 64k;
proxy_buffer_size 128k;
#general proxy settings
# force timeouts if the backend dies
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
# set headers
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
# Let the Odoo web service know that we’re using HTTPS, otherwise
# it will generate URL using http:// and not https://
proxy_set_header X-Forwarded-Proto $scheme;
# by default, do not forward anything
proxy_redirect off;
proxy_buffering off;
location / {
proxy_pass http://odoo-upstream;
}
location /longpolling {
proxy_pass http://odoo-im-upstream;
}
# cache some static data in memory for 60mins.
# under heavy load this should relieve stress on the Odoo web interface a bit.
location /web/static/ {
proxy_cache_valid 200 60m;
proxy_buffering on;
expires 864000;
proxy_pass http://odoo-upstream;
}
include /etc/nginx/custom_error_page.conf;
}
include /etc/nginx/conf.d/*.conf;
}
events {
worker_connections 1024;
}

Related

CORS Header missing in Firefox although add_header is used

I have an application that consists of a back end and a front end. Because of restrictions with the hoster, I need to provide the back end from a different server than the front end.
My back end handles authentication and serves the content to the front end. It also sends emails to users via nodemailer. Because I am not allowed to have outgoing TCP sockets on the server where the front end is hosted, this feature failed, which made me relocate the back end.
Now, I have the back end running on a different server. It consists of a loopback instance listening on a certain port which gets requests proxied to it by nginx.
After a while of setting up, I had the configuration working. It first failed because of a wrong CORS header, a problem that emerged because Loopback added Access-Control-Allow-Origin *;, which I also had in my nginx config. That resulted in Firefox throwing an error like CORS header does not match Origin (*, *), which made me think that the headers were stacked on top of each other, thus negating the wildcard *.
So I removed the add_header part from my nginx configuration. It worked fine when I tested, but when I came back, Firefox threw the error Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://{{my_nice_api}}/lang. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 404., which baffled me because I hadn't changed the setup at all.
Now, I have fiddled even more but am not able to find the error. I have add_header Access-Control-Allow-Origin *; set (for testing purposes, obviously), but I keep getting the error that there is no such header present. This post had me thinking that I needed to add another header, Access-Control-Allow-Credentials true;, but to no avail. Can anybody give any pointers as to what I am missing?
My nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
#ssl_certificate /etc/nginx/certs/cert.pem;
#ssl_certificate_key /etc/nginx/certs/key.pem;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
My site.conf (mounted into sites-available):
# Virtual Host configuration for {{my_nice_api}}
#
server {
listen 80;
listen [::]:80;
server_name {{my_nice_api}};
return 301 https://{{my_nice_api}}$uri;
#location / {
# rewrite ^ https://{{my_nice_api}}$request_uri permanent;
#}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /etc/nginx/certs/cert.pem;
ssl_certificate_key /etc/nginx/certs/key.pem;
#server_name {{my_nice_api}};
location / {
proxy_pass http://localhost:3001/api/;
#proxy_pass_request_headers on;
#proxy_http_version 1.1;
#proxy_cache_bypass $http_upgrade;
#proxy_set_header Upgrade $http_upgrade;
#proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Forwarded-Proto $scheme;
#proxy_set_header X-Forwarded-Host $host;
#proxy_set_header X-Forwarded-Port $server_port;
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Credentials true;
#add_header X-Frame-Options SAMEORIGIN;
}
}
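Not a definitive fix, but two details from the description above are worth checking. First, nginx's add_header is only attached to 2xx/3xx responses by default, and the failing request above returned a 404; adding the always flag also attaches the header to error responses. Second, if Loopback emits its own Access-Control-Allow-Origin, the browser ends up seeing the header twice; proxy_hide_header can drop the upstream copy so only one remains. A minimal sketch of both adjustments (treat it as an assumption to try, not a confirmed solution):
location / {
    proxy_pass http://localhost:3001/api/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # drop the CORS header added by Loopback so the browser sees only one copy
    proxy_hide_header Access-Control-Allow-Origin;
    # "always" attaches the header to error responses (e.g. the 404 above) as well
    add_header Access-Control-Allow-Origin "*" always;
}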

How to fix Error 400 Hook should contain payload for Jenkins running behind Nginx server

Jenkins is running behind an Nginx server on a CentOS virtual machine. I am able to access Jenkins via the web interface in a web browser. Since I want to trigger automatic builds when code is pushed to the GitHub repository, I have defined a GitHub repository webhook.
Then I edited the NGINX config file
/etc/nginx/nginx.conf
by adding this location block:
location /github-webhook {
proxy_pass http://localhost:8080/github-webhook;
proxy_method POST;
proxy_connect_timeout 150;
proxy_send_timeout 100;
proxy_read_timeout 100;
proxy_buffers 4 32k;
client_max_body_size 8m;
client_body_buffer_size 128k;
}
But when GitHub sends a POST request, Jenkins sends back a 400 Hook should contain payload response. Is there anything I can do to solve this issue?
Below is the complete Nginx config file (the domain name has been changed to xyz.com):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
upstream jenkins{
server 127.0.0.1:8080;
keepalive 16;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name xyz.com;
ssl_certificate /etc/letsencrypt/live/xyz.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/xyz.com/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
# replace with the IP address of your resolver
resolver 127.0.0.1;
ignore_invalid_headers off;
location /github-webhook {
proxy_pass http://localhost:8080/github-webhook;
proxy_method POST;
proxy_connect_timeout 150;
proxy_send_timeout 100;
proxy_read_timeout 100;
proxy_buffers 4 32k;
client_max_body_size 8m;
client_body_buffer_size 128k;
}
location / {
proxy_pass http://jenkins;
# we want to connect to Jenkins via HTTP 1.1 with keep-alive connections
proxy_http_version 1.1;
# has to be copied from server block,
# since we are defining per-location headers, and in
# this case server headers are ignored
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# no Connection header means keep-alive
proxy_set_header Connection "";
# Jenkins will use this header to tell if the connection
# was made via http or https
proxy_set_header X-Forwarded-Proto $scheme;
# increase body size (default is 1mb)
client_max_body_size 10m;
# increase buffer size, not sure how this impacts Jenkins, but it is recommended
# by official guide
client_body_buffer_size 128k;
# block below is for HTTP CLI commands in Jenkins
# increase timeouts for long-running CLI commands (default is 60s)
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
# disable buffering
proxy_buffering off;
proxy_request_buffering off;
}
}
}
(The GitHub webhook settings and the Jenkins project's GitHub configuration were set in their respective web UIs; the screenshots are not included here.)
The problem was solved by setting the Jenkins URL field to http://localhost:8080/ instead of xyz.com:8080/. You can access this field by going to Jenkins > Manage Jenkins > Configure System.
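Independent of the Jenkins URL fix above, one nginx-side detail worth double-checking (an assumption, not something the answer required) is that the /github-webhook location forwards the same proxy headers as the main Jenkins location and reuses the jenkins upstream instead of a hard-coded localhost address. A sketch:
location /github-webhook {
    proxy_pass http://jenkins;
    # GitHub already sends POST, so forcing proxy_method is not needed
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    client_max_body_size 8m;
    client_body_buffer_size 128k;
}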

Why does NGINX load balancer passive health check not detect when upstream server is offline?

I have an upstream block in an Nginx config file. This block lists multiple backend servers across which requests are load balanced.
...
upstream backend {
server backend1.com;
server backend2.com;
server backend3.com;
}
...
Each of the above 3 backend servers is running a Node application.
If I stop the application process on backend1, Nginx recognises this via its passive health check and traffic is directed only to backend2 and backend3, as expected.
However, if I power down the server on which backend1 is hosted, Nginx does not recognise that it is offline and still tries to direct traffic/requests to it, resulting in a 504 error.
Can someone shed some light on why this (scenario 2 above) may happen and if there is some further configuration needed that I am missing?
Update:
I'm beginning to wonder if the behaviour I'm seeing is because the above upstream block is located within an http {} Nginx context. If backend1 was indeed powered down, this would be a connection error, so (maybe off the mark here, but just thinking aloud) should this be a TCP health check?
Update 2:
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
upstream backends {
server xx.xx.xx.37:3000 fail_timeout=2s;
server xx.xx.xx.52:3000 fail_timeout=2s;
server xx.xx.xx.69:3000 fail_timeout=2s;
}
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_certificate …
ssl_certificate_key …
ssl_ciphers …;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
default (the site config under sites-enabled):
server {
listen 80;
listen [::]:80;
return 301 https://$host$request_uri;
#server_name ...;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
# SSL configuration
...
# Add index.php to the list if you are using PHP
index index.html index.htm;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.html;
#try_files $uri $uri/ =404;
}
location /api {
rewrite /api/(.*) /$1 break;
proxy_pass http://backends;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
# Requests for socket.io are passed on to Node on port 3000
location /socket.io/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://backends;
}
}
The reason you get a 504 is that nginx still tries to connect to the configured backend (e.g. / for a 200 status code), but since backend1 is powered down, nothing is listening and there is no host to close the socket.
It takes a while for that connection attempt to time out, hence the 504 Gateway Timeout.
It's a different case when you only stop the application process: the port is not listening, the connection is refused, and that is detected pretty quickly, so the instance is marked as unavailable.
To overcome this you can set fail_timeout=2s so the server is marked as unavailable sooner (the default is 10 seconds).
https://nginx.org/en/docs/http/ngx_http_upstream_module.html?&_ga=2.174685482.969425228.1595841929-1716500038.1594281802#fail_timeout
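A sketch of that suggestion applied to the config above; the max_fails and proxy_connect_timeout values are additions of mine (illustrative only), so that nginx gives up on an unreachable host quickly and retries the next server instead of waiting for the long default connect timeout:
upstream backends {
    server xx.xx.xx.37:3000 max_fails=1 fail_timeout=2s;
    server xx.xx.xx.52:3000 max_fails=1 fail_timeout=2s;
    server xx.xx.xx.69:3000 max_fails=1 fail_timeout=2s;
}
location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://backends;
    # fail over quickly when a host does not answer at all
    proxy_connect_timeout 2s;
    # retry the request on the next upstream after a connect error or timeout
    proxy_next_upstream error timeout;
}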

Nginx - How to run multiple instances of Odoo with different subdomain names

I'd like to run two instances of Odoo v10 at different URLs.
The first instance will include multiple databases for our testing purposes, running at mydomain.com.
The second instance will hold demo databases for demonstrating Odoo to our clients, at clients.mydomain.com.
Both instances should be running on the same server.
I did a lot of research to figure out how to achieve this, but I didn't find any guide that could help me do it using an Nginx reverse proxy.
Here's my Nginx configuration file:
upstream backend-odoo {
server 127.0.0.1:8069;
}
upstream backend-odoo-im {
server 127.0.0.1:8072;
}
server {
listen 80;
add_header Strict-Transport-Security max-age=2592000;
rewrite ^/.*$ https://example.com$request_uri? permanent;
}
server {
listen 443 default;
# ssl settings
ssl on;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
keepalive_timeout 60;
#increase the upload file size limit
client_max_body_size 300M;
# proxy header and settings
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
# odoo log files
access_log /var/log/nginx/odoo-access.log;
error_log /var/log/nginx/odoo-error.log;
# increase proxy buffer size
proxy_buffers 16 64k;
proxy_buffer_size 128k;
# force timeouts if the backend dies
proxy_next_upstream error timeout invalid_header http_500
http_502 http_503;
# enable data compression
gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/plain application/x-javascript text/xml text/css;
gzip_vary on;
location / {
proxy_pass http://backend-odoo;
}
location ~* /web/static/ {
# cache static data
proxy_cache_valid 200 60m;
proxy_buffering on;
expires 864000;
proxy_pass http://backend-odoo;
}
location /longpolling {
proxy_pass http://backend-odoo-im;
}
}
PS: I tried to set dbfilter = ^%d$ in the Odoo configuration file, but I get nothing.
Try dbfilter = %h$; that works better for me.
You have to rename your databases so that they match the URLs:
yourdomain.com gets yourdomain_com as the DB name.
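On the nginx side, routing the two instances by subdomain mostly comes down to giving each server block a server_name and pointing it at the right upstream. A minimal sketch based on the config above; the second instance's port (8169 here) is a placeholder and depends on how that instance is started:
upstream backend-odoo-demo {
    server 127.0.0.1:8169;  # hypothetical port of the second Odoo instance
}
server {
    listen 443 ssl;
    server_name mydomain.com;
    # ssl and proxy settings as in the config above
    location / { proxy_pass http://backend-odoo; }
    location /longpolling { proxy_pass http://backend-odoo-im; }
}
server {
    listen 443 ssl;
    server_name clients.mydomain.com;
    # ssl and proxy settings as in the config above
    location / { proxy_pass http://backend-odoo-demo; }
}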

Preserve response headers in nginx

I have a reverse-proxy setup (I think) for gunicorn running a Falcon app. I was also able to set up SSL on the nginx server. The /etc/nginx/nginx.conf:
worker_processes 1;
user nobody nogroup;
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # set to 'on' if nginx worker_processes > 1
}
http {
include mime.types;
# fallback in case we can't determine a type
default_type application/json;
access_log /tmp/nginx.access.log combined;
sendfile on;
gzip on;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 500;
gzip_disable "MSIE [1-6]\.";
gzip_types application/json;
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
server 127.0.0.1:6789 fail_timeout=0;
}
server {
# if no Host match, close the connection to prevent host spoofing
listen 80 default_server;
return 444;
}
server {
listen 443 ssl;
client_max_body_size 4G;
# set the correct host(s) for your site
server_name 0.0.0.0;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
keepalive_timeout 2;
location / {
proxy_bind $server_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect off;
proxy_pass http://app_server;
}
}
}
What do I need to change so that the response headers from gunicorn are preserved? Also, I am completely new to this, so is there anything else I should change?
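For what it's worth, nginx forwards most upstream response headers to the client as-is; by default it only suppresses a handful (Date, Server, X-Pad and X-Accel-*). If a particular header from gunicorn is missing, the directives below are the relevant knobs; a sketch, assuming the hidden headers are the problem (the X-Some-Header name is just a placeholder):
location / {
    proxy_pass http://app_server;
    # re-enable headers nginx hides by default
    proxy_pass_header Server;
    proxy_pass_header Date;
    # or hide a specific upstream header instead
    # proxy_hide_header X-Some-Header;
}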
