NGINX fails to pass traffic to the application

I have an nginx proxy, where an SSL certificate is terminated, in front of an application (listening on 10.10.10.10:80). I have an issue when trying to access the log-in page: nginx redirects traffic to port 80 (where nothing is listening).
The NGINX configuration is shown below:
server {
    listen 10.11.11.11:443 ssl;
    server_name test.example.com;

    access_log /var/log/nginx/test-access.log main;
    error_log /var/log/nginx/test-error.log warn;

    client_body_buffer_size 1M;
    client_max_body_size 16M;
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    ssl_certificate <PATH>/cert.crt;
    ssl_certificate_key <PATH>/cert.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        proxy_pass http://10.10.10.10;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_ignore_headers Expires Cache-Control Set-Cookie;
        proxy_pass_header Content-Type;
        proxy_pass_header Content-Disposition;
        proxy_pass_header Content-Length;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffers 32 4k;
        proxy_max_temp_file_size 0;
        proxy_force_ranges on;
    }
}
What is needed so that NGINX always keeps client traffic on 10.11.11.11:443 while transparently proxying it to 10.10.10.10:80?
PS: If I manually enter the FQDN (https://test.example.com) for the failed request, the request succeeds.
Hope I explained it properly :)
Thank you.

Sounds like you are testing using the IP address (10.11.11.11), and your proxy_pass endpoint (10.10.10.10) is configured to only accept requests for the specified FQDN (test.example.com) on HTTP (TCP port 80).
When it receives a request for a domain it does not recognize, it redirects the user to what it believes should work: http://test.example.com.
You have a couple of options to fix this (a sketch of options 2 and 3 follows the list):
1) Update the upstream server to accept requests for additional Host header values.
2) Rewrite the 302 Location header in the response to change the protocol from HTTP to HTTPS.
3) Configure a server block that listens on HTTP and redirects to HTTPS.
4) Hard-code the 'proxy_set_header Host' directive to test.example.com so it matches what the upstream expects (not recommended, because it could create unexpected results down the road when troubleshooting different issues).
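For options 2 and 3, the nginx side could look roughly like this (a sketch only, reusing the addresses and names from your question):

# Option 3: a plain-HTTP listener that bounces every request to HTTPS
server {
    listen 10.11.11.11:80;
    server_name test.example.com;
    return 301 https://$host$request_uri;
}

# Option 2: inside the existing HTTPS server block, rewrite the upstream's
# "Location: http://test.example.com/..." redirect back to https:// before
# it reaches the client
location / {
    proxy_pass http://10.10.10.10;
    proxy_set_header Host $host;
    proxy_redirect http://test.example.com/ https://test.example.com/;
}

Note that proxy_redirect only rewrites the Location and Refresh response headers; it does not touch absolute links inside the HTML body.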

Related

Nginx returns web requests with internal IP address

I am deploying InvenioRDM locally.
Here is a gist of the limitations:
- InvenioRDM runs as a local instance for prototyping
- The application is strictly IP address and port bound
- The aim is to link the IP to a URL in a seamless manner
The work so far:
The InvenioRDM local instance exposes only the application frontend.
Approaches:
i) Mimic production: the Nginx configuration was initially set up to mirror production. The production environment is purely containers. It turned out very complex, so I decided to try a simpler approach.
ii) Transparent proxy: use Nginx to pass everything through and replace the URLs at ingress (proxy_pass) and egress (proxy_redirect). The benefit is a simpler web server configuration, since the application already handles HTTP requests.
My default.conf is as follows.
# HTTP server
server {
    # Redirects all requests to https. - this is in addition to HAProxy which
    # already redirects http to https. This redirect is needed in case you access
    # the server directly (e.g. useful for debugging).
    listen 80; # IPv4
    server_name server.name;
    return 301 https://$host$request_uri;
}

# HTTPS server
server {
    listen 443 ssl;
    server_name server.name;
    charset utf-8;
    keepalive_timeout 5;

    ssl_certificate /etc/ssl/test.crt;
    ssl_certificate_key /etc/ssl/test.key;
    ssl_session_cache builtin:1000 shared:SSL:50m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AE$
    #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    proxy_request_buffering off;
    proxy_http_version 1.1;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://127.0.0.1:5000;
        proxy_read_timeout 90;
        proxy_redirect https://127.0.0.1:5000 https://server.name;
    }
}
My issue is that when accessing it publicly via the server.name address (hidden for obvious reasons), it responds with the internal Class A IP address (10.X.X.X) of the machine, which is of course not accessible publicly. What am I missing here?
I am new to this, and I am at my wits' end.
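Note that nginx accepts multiple proxy_redirect directives, so a second mapping could in principle rewrite Location headers that carry the machine's internal address as well; the 10.x address and scheme below are placeholders, not taken from the setup above:

location / {
    proxy_set_header Host $host;
    proxy_pass https://127.0.0.1:5000;
    # existing mapping for redirects issued with the loopback address
    proxy_redirect https://127.0.0.1:5000 https://server.name;
    # hypothetical second mapping for redirects carrying the internal 10.x address
    proxy_redirect https://10.0.0.5:5000 https://server.name;
}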

Nginx 301 when accessing a proxied domain locally

I have Nginx running as a reverse proxy for a domain, let's call it "testdomain.com". The proxy itself is working and I can access this website from almost anywhere I want, except locally.
To clarify, here's my architecture:
I have an ESXi server with a pfSense VM; the pfSense VM port-forwards all requests destined for port 80 to port 80 of another VM. That VM has a Docker container running nginx, so the traffic goes to port 80 of the container, which then proxy-passes the HTTP request to another external server where the application (WordPress) is hosted. As I said earlier, this works fine; however, if I execute a curl locally (i.e. within my first VM or the nginx container) against my address, it returns the following:
curl testdomain.com
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
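Dumping only the response headers shows exactly where that 301 points (-I sends a HEAD request; the hostname is the same placeholder as above):

# Location header returned by the local nginx
curl -sI http://testdomain.com | grep -i '^location'

# hit the container directly while forcing the expected Host header
curl -sI -H 'Host: testdomain.com' http://127.0.0.1 | grep -i '^location'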
It seems that Nginx can't find the vhost. Here's how my .conf for the website looks:
server {
    listen 80;
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    server_name testdomain.com;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        add_header Cache-Control public;
        add_header Pragma public;
        add_header Vary Accept-Encoding;
        expires 60M;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://myexternalserver.com:80;
    }
}

server {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    listen 443 ssl http2;
    server_name testdomain.com;
    access_log /var/log/nginx/access.log;

    ssl_certificate /etc/nginx/ssl/nginx-selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx-selfsigned.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    add_header Strict-Transport-Security "max-age=31536000" always;

    ssl_session_cache shared:SSL:40m;
    ssl_session_timeout 4h;
    ssl_session_tickets on;

    location / {
        add_header Cache-Control public;
        add_header Pragma public;
        add_header Vary Accept-Encoding;
        expires 60M;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://myexternalserver.com:443;
    }
}
I apologize if I missed any relevant info.
Thank you!

Using both Varnish and Nginx cache

Are there any performance benefits, or any performance degradation, in using both the Varnish and nginx proxy caches together? I have a Magento 2 site running with the nginx cache, Redis for session storage and the backend cache, and Varnish in front, all on the same CentOS machine. Any inputs or advice, please? Below is the currently used nginx configuration file.
# Server globals
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /var/run/nginx.pid;

# Worker config
events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    # Main settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_header_timeout 1m;
    client_body_timeout 1m;
    client_header_buffer_size 2k;
    client_body_buffer_size 256k;
    client_max_body_size 256m;
    large_client_header_buffers 4 8k;
    send_timeout 30;
    keepalive_timeout 60 60;
    reset_timedout_connection on;
    server_tokens off;
    server_name_in_redirect off;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 512;

    # Proxy settings
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header Set-Cookie;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;

    # SSL PCI compliance
    ssl_session_cache shared:SSL:40m;
    ssl_buffer_size 4k;
    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Error pages
    error_page 403 /error/403.html;
    error_page 404 /error/404.html;
    error_page 502 503 504 /error/50x.html;

    # Cache settings
    proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=1024m;
    proxy_cache_key "$host$request_uri $cookie_user";
    proxy_temp_path /var/cache/nginx/temp;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_use_stale error timeout invalid_header http_502;
    proxy_cache_valid any 1d;

    # Cache bypass
    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    # File cache settings
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;

    # Wildcard include
    include /etc/nginx/conf.d/*.conf;
}
It would simply be undesirable.
Magento and Varnish work together, tightly integrated. The key to efficient caching is having your app (Magento) be able to invalidate a specific page's cache when its content has changed.
E.g. you update the price of a product: Magento talks to Varnish and sends a purge request for specific cache tags, which include the product ID.
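Roughly, that purge is just an HTTP request with a custom method and a tag-pattern header; the header name, tag format, and port below are illustrative only, since Magento's built-in Varnish integration sends it for you:

# illustration: ask Varnish to drop every page tagged with product ID 42
curl -X PURGE -H 'X-Magento-Tags-Pattern: ((^|,)cat_p_42(,|$))' http://127.0.0.1:6081/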
There is simply no such integration between Magento and NGINX, so you risk, at minimum:
- stale pages / old product data being displayed
- users seeing each other's accounts (as long as you keep the config above), unless you configure the nginx cache to bypass on Magento-specific cookies (a sketch follows below)
The only benefit of having a cache in NGINX (on the TLS side) is saving on the absolutely negligible proxy buffering overhead. It's definitely not worth the trouble, so you should use only the cache in Varnish.
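If you nevertheless keep an nginx cache layer, the cookie bypass mentioned above would look roughly like this, building on the $no_cache map already present in your config (a sketch only; the backend address and cookie coverage are assumptions):

location / {
    proxy_cache cache;                 # keys_zone defined in the http block above
    proxy_cache_bypass $no_cache;      # skip cache lookups for session traffic
    proxy_no_cache $no_cache;          # and never store responses for it
    proxy_pass http://127.0.0.1:8080;  # placeholder backend (e.g. Varnish) address
}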

How to run odoo in https mode using nginx?

I am trying to run Odoo in https mode using nginx, but it's not working. This is what I tried:
sudo apt-get install nginx
cd /etc/nginx/sites-available
sudo openssl genrsa -des3 -passout pass:odoo -out server.temp.key 2048
sudo openssl req -new -passin pass:odoo -key server.temp.key -out server.csr
sudo openssl rsa -in server.temp.key -out server.key
sudo rm server.temp.key
sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
My nginx configuration file:
upstream odoo {
    server localhost:8069 weight=1 fail_timeout=3000s;
}

server {
    listen 443;
    listen [::]:443 ipv6only=on;
    server_name odoo.example.com;

    ssl on;
    ssl_ciphers ALL:!ADH:!MD5:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Specifies the maximum accepted body size of a client request,
    # as indicated by the request header Content-Length.
    client_max_body_size 200m;

    # add ssl specific settings
    keepalive_timeout 60;

    # increase proxy buffer to handle some OpenERP web requests
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    location / {
        proxy_pass http://odoo;
        # Force timeouts if the backend dies
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        # Set headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
        # Let the Odoo web service know that we're using HTTPS, otherwise
        # it will generate URL using http:// and not https://
        proxy_set_header X-Forwarded-Proto https;
        # Set timeouts
        proxy_connect_timeout 3600;
        proxy_send_timeout 3600;
        proxy_read_timeout 3600;
        send_timeout 3600;
        # By default, do not forward anything
        proxy_redirect off;
    }

    # Cache some static data in memory for 60mins.
    # under heavy load this should relieve stress on the Odoo web interface a bit.
    location ~* /[0-9a-zA-Z_]*/static/ {
        proxy_cache_valid 200 60m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://odoo;
    }

    access_log /var/log/nginx/odoo-ssl.access.log;
    error_log /var/log/nginx/odoo-ssl.error.log;
}
After this I restarted nginx, enabled proxy mode in the Odoo config and restarted the Odoo server, but my site still runs in http mode. I have not given my site a domain name. Is that compulsory before setting up nginx?
OK, let's start from the beginning. In order to set up Odoo with SSL you need:
1) a domain name
2) a proper config for the reverse proxy (you are using nginx, so it will be an easy fix)
3) an SSL certificate
4) an updated Odoo config
I have written down some hints for the above points.
1) I assume that you have a domain pointing to your server. If not, then you need to visit your domain control panel and set the DNS (simply put your server IP in the "A" record; you can verify it with the dig command shown below). Sample tutorial on this (see point 5):
https://www.cier.tech/blog/blog-1/post/how-to-publish-your-website-on-amazon-ec2-linux-ubuntu-server-13
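Before touching nginx you can check that the A record resolves to your server (the domain is the placeholder used in the config below):

# should print your server's public IP
dig +short odoo.mycompany.com A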
2) A sample nginx config for Odoo:
upstream odoo {
    server 127.0.0.1:8069;
}

upstream odoochat {
    server 127.0.0.1:8072;
}

# http -> https
server {
    listen 80;
    server_name odoo.mycompany.com; # replace with your domain
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443;
    server_name odoo.mycompany.com; # replace with your domain
    proxy_read_timeout 720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout 720s;

    # Add Headers for odoo proxy mode
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    # SSL parameters - update with your cert details
    ssl on;
    ssl_certificate /etc/ssl/nginx/server.crt;
    ssl_certificate_key /etc/ssl/nginx/server.key;
    ssl_session_timeout 30m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    # log
    access_log /var/log/nginx/odoo.access.log;
    error_log /var/log/nginx/odoo.error.log;

    # Redirect requests to odoo backend server
    location / {
        proxy_redirect off;
        proxy_pass http://odoo;
    }

    location /longpolling {
        proxy_pass http://odoochat;
    }

    # common gzip
    gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
    gzip on;
}
As you can see, there is also an upstream for the chat, as it works on a separate port.
Remember to create a symlink in sites-enabled:
ln -s /etc/nginx/sites-available/yoursite.com /etc/nginx/sites-enabled/yoursite.com
Then test the nginx config and restart it:
nginx -t
service nginx restart
The config above comes from:
https://www.odoo.com/documentation/10.0/setup/deploy.html
4) Update your Odoo config (a minimal sketch follows) with:
- proxy_mode = True
- workers = you need more than one worker if you want the "chat" and "discuss" modules to work properly.
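A minimal sketch of the corresponding odoo.conf lines (the worker count is only an example; size it to your hardware):

[options]
proxy_mode = True
# more than one worker so the chat/longpolling endpoint keeps working
workers = 2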

nginx http to https redirect configuration not working

I have configured my nginx based on the documentation provided and articles available on the web. It's not completely working, specifically the http to https redirect.
I tried different changes but still have not been able to get it working... Please have a look.
A few important points: my Node.js app is running on port 3000.
The Ghost blog is running on port 2368.
HTTP — redirect all traffic to HTTPS
server {
    listen 80;
    server_name domainname.com www.domainname.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name www.domainname.com;
    error_page 497 https://www.domainname.com$request_uri;

    ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers KEY_HERE;
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    # ssl_trusted_certificate /etc/ssl/certs/dhparam.pem;
    resolver 8.8.8.8;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /blog {
        proxy_pass http://localhost:2368;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This issue is resolved.
Everything was correct in the nginx configuration; the issue was with the Google Cloud Platform console. There is a checkbox in the GCP instance config named "Allow HTTP traffic", which is unchecked by default. I made the change and it started working (a roughly equivalent gcloud command is shown below for reference). Thanks for the reply.
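For reference, that console checkbox roughly corresponds to a firewall rule like the following (the rule name and target tag are illustrative; adjust them to your project):

# allow inbound HTTP to instances tagged http-server
gcloud compute firewall-rules create default-allow-http \
    --allow tcp:80 \
    --source-ranges 0.0.0.0/0 \
    --target-tags http-server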
I recommend you do it as below (a fuller server-level variant follows the snippet):
location / {
    return 301 https://$host$request_uri;
}
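For context, a complete catch-all port-80 server doing the same thing could look like this (a sketch; $host preserves whichever hostname the client actually used, while $server_name always expands to the first name in the server_name directive):

server {
    listen 80;
    server_name domainname.com www.domainname.com;

    # keep the www/non-www variant the client asked for
    return 301 https://$host$request_uri;
}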
