Using both Varnish and Nginx cache - nginx

Are there any performance benefits, or any performance degradation, in using both Varnish and the nginx proxy cache together? I have a Magento 2 site running with nginx cache, Redis for session storage and backend cache, and Varnish in front, all on the same CentOS machine. Any input or advice, please? Below is the currently used nginx configuration file.
# Server globals
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /var/run/nginx.pid;

# Worker config
events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    # Main settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_header_timeout 1m;
    client_body_timeout 1m;
    client_header_buffer_size 2k;
    client_body_buffer_size 256k;
    client_max_body_size 256m;
    large_client_header_buffers 4 8k;
    send_timeout 30;
    keepalive_timeout 60 60;
    reset_timedout_connection on;
    server_tokens off;
    server_name_in_redirect off;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 512;

    # Proxy settings
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header Set-Cookie;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;

    # SSL PCI Compliance
    ssl_session_cache shared:SSL:40m;
    ssl_buffer_size 4k;
    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Error pages
    error_page 403 /error/403.html;
    error_page 404 /error/404.html;
    error_page 502 503 504 /error/50x.html;

    # Cache settings
    proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=1024m;
    proxy_cache_key "$host$request_uri $cookie_user";
    proxy_temp_path /var/cache/nginx/temp;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_use_stale error timeout invalid_header http_502;
    proxy_cache_valid any 1d;

    # Cache bypass
    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    # File cache settings
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;

    # Wildcard include
    include /etc/nginx/conf.d/*.conf;
}

It would simply be undesirable.
Magento and Varnish are tightly coupled and work together. The key to efficient caching is having your app (Magento) able to invalidate a specific page's cache when its content has changed.
E.g. you update a price for a product: Magento talks to Varnish and sends a purge request for specific cache tags, which include the product ID.
There is simply no such integration between Magento and NGINX, so you risk, at minimum:
stale pages / old product data being displayed
users seeing each other's accounts (as long as you keep your config above), unless you configure the nginx cache to bypass on Magento-specific cookies
The only benefit of having a cache in NGINX (on the TLS side) is saving on absolutely negligible proxy buffering overhead. It's definitely not worth the trouble, so you should use only the cache in Varnish.
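To illustrate, here is a minimal sketch (not taken from the question's setup; the upstream port, host name and certificate paths are assumptions) of an nginx vhost that only terminates TLS and hands everything to Varnish, with no proxy_cache directives at all:
upstream varnish {
    server 127.0.0.1:6081;                       # assumption: Varnish default listen port
}

server {
    listen 443 ssl;
    server_name shop.example.com;                # assumption: placeholder host
    ssl_certificate     /etc/ssl/shop.crt;       # assumption: placeholder path
    ssl_certificate_key /etc/ssl/shop.key;       # assumption: placeholder path

    location / {
        proxy_pass http://varnish;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Deliberately no proxy_cache here: Varnish owns full-page caching,
        # and Magento purges it by cache tags, which nginx cannot do.
    }
}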

Related

nginx and blazor - upstream prematurely closed connection - 502 Bad Gateway

I am trying to deploy a Blazor Server template app on Nginx, but I'm stuck with this problem.
I tried everything that I could find online, but still the same error.
error.log:
*36 upstream prematurely closed connection while reading response header from upstream, client:, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:7155/"
In case that helps, browsers just show a 502 status code.
This is my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
and here is the server block at /sites-enabled/:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        proxy_pass http://dotnet;
        proxy_set_header Host $host;
        proxy_http_version 1.1; # you need to set this in order to use params below.
        proxy_temp_file_write_size 64k;
        proxy_connect_timeout 10080s;
        proxy_send_timeout 10080;
        proxy_read_timeout 10080;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_redirect off;
        proxy_request_buffering off;
        proxy_buffering off;
    }
}

upstream dotnet {
    zone dotnet 64k;
    server 127.0.0.1:7155;
}
I don't know what I am doing wrong. Please help.
Based on this I made some notes on how to deploy a Blazor Server app on Nginx. I'm sharing them in the hope that they help.
Install nginx and start it:
sudo apt-get install nginx
sudo service nginx start
Now you need to configure it so that requests arriving to port 80 are passed to your app on port 5000. To do that, open the /etc/nginx/sites-available/default file in your favorite editor. The default configuration defines only one server, listening on port 80. Under this server, look for the section starting with location /: this is the configuration for the root path on this server. Replace it with the following configuration:
location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    # try_files $uri $uri/ =404;
    proxy_pass http://localhost:5000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
This should prevent the connection from reverting to long-polling.
Reload nginx:
sudo nginx -s reload
The default under /etc/nginx/sites-available/ looks like this:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        proxy_pass http://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
Microsoft reference on how to deploy.
I just solved the problem. The server block was redirecting to SSL, but when I called the upstream I was not doing it with HTTPS!
To solve the problem I just changed
proxy_pass http://dotnet;
to
proxy_pass https://dotnet;
and now everything works.
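For completeness, a hedged sketch of how the final location block could look once the upstream is reached over HTTPS (the commented proxy_ssl_server_name line is an extra assumption, only relevant if the backend certificate requires SNI; it is not part of the original answer):
location / {
    proxy_pass https://dotnet;           # upstream now reached over TLS, matching the redirect-to-SSL setup
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    # Blazor Server uses SignalR, so keep the WebSocket upgrade headers
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
    # proxy_ssl_server_name on;          # assumption: enable if the backend needs SNI to pick its certificate
}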

Nginx proxy + CDN

We are using a VM running an nginx proxy (caching turned off) to redirect hosts to the correct CDN URL.
We are experiencing issues with the proxy showing old content that does not match what shows on the CDN. The CDN provider we use is Verizon (through Azure: Microsoft CDN by Verizon).
When we do updates to the origin, we automatically send purge requests to the CDN. These can be both manual and automatic dynamic updates, and both single-URL purges and wildcard ones. What seems to happen is that when we get two purge requests close together in time, the first one goes through to the proxy but the second one does not, although both show correctly when accessing the CDN URL directly.
Note that this issue only happens about 30% of the time.
nginx sample conf:
server {
    resolver 8.8.8.8;
    server_name <CUSTOM HOST>;

    location / {
        # Turn off all caching
        proxy_no_cache 1;
        proxy_cache_bypass 1;
        proxy_redirect off;
        expires -1;
        proxy_cache off;

        # Proxy settings
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;

        # Override cache headers to set 2min browser cache
        proxy_hide_header Cache-Control;
        add_header Cache-Control max-age=120;

        proxy_pass "<CDN URL>$request_uri";
    }
}
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile off;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Below is an example where the CDN is showing newer content than the proxy: the Last-Modified headers of the CDN response and the PROXY response do not match (header screenshots omitted).
I have tried going through a VPN to see if there is anything with a particular POP that the proxy hits, but all POPs show the correct content.
When the error is present, sending a curl request from the proxy to the CDN results in the same incorrect headers.
After we perform a purge, several requests go through directly to the origin until the CDN starts serving a cached version again; we then receive the first HIT about one minute later.
I started to assume that this might have something to do internally with Azure & Verizon, so I created an exact duplicate proxy hosted on Amazon, but the error seems to persist.
Is there something else in the nginx config that can cause this behavior?
Try adding the following, just to see whether the pages are retrieved from the upstream on each hit:
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
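As a hedged sketch (not from the original answer), these debugging directives would sit inside the existing location block alongside the cache-bypass settings, temporarily replacing the "2min browser cache" headers while testing:
server {
    resolver 8.8.8.8;
    server_name <CUSTOM HOST>;

    location / {
        proxy_no_cache 1;
        proxy_cache_bypass 1;
        proxy_redirect off;

        # Debugging additions suggested above
        if_modified_since off;
        expires off;
        etag off;
        proxy_hide_header Cache-Control;
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';

        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_pass "<CDN URL>$request_uri";
    }
}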

NGINX failed to pass traffic to application

I have an nginx proxy in front of an application (listening on 10.10.10.10:80) where an SSL certificate is terminated, but I have an issue when trying to access the log-in page: nginx redirects traffic to port 80 (which is not listening).
The NGINX configuration is shown below:
server {
    listen 10.11.11.11:443 ssl;
    server_name test.example.com;

    access_log /var/log/nginx/test-access.log main;
    error_log /var/log/nginx/test-error.log warn;

    client_body_buffer_size 1M;
    client_max_body_size 16M;
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    ssl_certificate <PATH>/cert.crt;
    ssl_certificate_key <PATH>/cert.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        proxy_pass http://10.10.10.10;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_ignore_headers Expires Cache-Control Set-Cookie;
        proxy_pass_header Content-Type;
        proxy_pass_header Content-Disposition;
        proxy_pass_header Content-Length;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffers 32 4k;
        proxy_max_temp_file_size 0;
        proxy_force_ranges on;
    }
}
What is needed so that NGINX always directs traffic to 10.11.11.11:443 and from there to 10.10.10.10:80?
PS: If I manually enter the FQDN (https://test.example.com) for the failed request, the request becomes successful.
I hope I explained it properly :)
Thank you.
Sounds like you are testing using the IP address (10.11.11.11), and your proxy_pass endpoint (10.10.10.10) is configured to only accept requests for the specified FQDN (test.example.com) on HTTP (TCP port 80).
When it receives a request for a domain it does not recognize, it redirects the user to what it believes should work: http://test.example.com
You have a couple of options to fix this:
Update the upstream server to accept requests for additional Host header values.
Rewrite the 302 Location header in the response to change the protocol from HTTP to HTTPS (see the sketch below).
Configure a server block to listen on HTTP and have it redirect to HTTPS (also sketched below).
Hard-code the 'proxy_set_header Host' directive to test.example.com so it matches what the upstream expects (not recommended, because it could create unexpected results down the road when troubleshooting different issues).
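A hedged sketch of the second and third options, reusing the names from the question (the exact Location rewrite depends on what the backend actually sends):
# Option 3: catch plain-HTTP requests and bounce them back to HTTPS
server {
    listen 10.11.11.11:80;
    server_name test.example.com;
    return 301 https://$host$request_uri;
}

# Option 2: inside the existing HTTPS server block, rewrite the backend's
# absolute HTTP redirects so the browser stays on HTTPS
location / {
    proxy_pass http://10.10.10.10;
    proxy_set_header Host $host;
    proxy_redirect http://test.example.com/ https://test.example.com/;
}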

nginx + gunicorn + pyramid: Get actual domain name from headers

I just set up nginx + gunicorn to serve a Pyramid web application. My application relies on getting the subdomain, which is different for every client. I didn't know this before, but when going through gunicorn, it seems that the only thing I can get for the domain is what I have in the INI file used to configure Gunicorn: localhost.
I'm wondering if there is a way to get the actual, full domain name where the request originated? It can't be anything hard-coded, since the subdomains could be different for each request. Does anyone have any ideas on how to make this happen?
UPDATE
I made the change requested by Fuero, changing the value I had for
proxy_set_header Host $http_host;
to
proxy_set_header Host $host;
Unfortunately, that didn't do it. I'm still seeing 127.0.0.1:6500 in the environment as the remote address, host, etc. The only thing that shows me the actual client request domain is the referrer. I'm including my config file below, hoping something still stands out.
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    # sendfile on;
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/html text/css
               application/json application/x-javascript
               text/xml application/xml application/xml+rss
               text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    upstream myapp-site {
        server 127.0.0.1:6500;
    }

    server {
        access_log /var/www/tmsenv/logs/access.log;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 60s;
            proxy_send_timeout 90s;
            proxy_read_timeout 90s;
            proxy_buffering off;
            proxy_temp_file_write_size 64k;
            proxy_pass http://myapp-site;
            proxy_redirect off;
        }
    }
}
Since you're using WSGI, you can find the hostname in environ['HTTP_HOST']. See PEP 333 for more details and the other information you can retrieve.
Add this to your config:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
This preserves the server name that was entered in the browser in the Host: header and adds the client's IP address in X-Forwarded-For:.
Access them via environ['HTTP_HOST'] and environ['HTTP_X_FORWARDED_FOR']. The WSGI server might be smart enough to respect X-Forwarded-For: when setting REMOTE_ADDR, though.
I finally got this working, and the fix was something stupid, as is typical. While digging for answers I noticed my config didn't have the root or static directory location for which I'm using nginx in the first place. I asked the person who set this up, and they pointed out that it was in another config file, which is consumed via the include.
include /etc/nginx/sites-enabled/*;
I went to that file and added the suggested headers, and it works.
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
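In other words, the headers have to go inside the location block of the included vhost file, not only in the http block of nginx.conf. A hedged sketch of that included file (the file name, server_name pattern and upstream name are assumptions standing in for whatever the real sites-enabled file contains):
# /etc/nginx/sites-enabled/myapp   (illustrative file name)
server {
    server_name *.example.com;                    # assumption: one vhost covering the per-client subdomains

    location / {
        proxy_set_header Host $host;              # this is the value Pyramid sees as environ['HTTP_HOST']
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://myapp-site;
    }
}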

504 Gateway Time-out with Nginx / GlassFish

Many answers on SO mention FastCGI params to prevent the timeout. I tried to follow this advice (see the fastcgi param below), but it does not prevent the timeout.
I use Nginx to redirect to a GlassFish app on port 8080. My nginx.conf:
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 360;
    types_hash_max_size 2048;
    # server_tokens off;
    ...
}
And my site.conf:
server {
    listen 80;
    server_name server.net www.server.net;

    location /Server-1.0-SNAPSHOT/ {
        proxy_pass http://localhost:8080/Server-1.0-SNAPSHOT/;
        proxy_set_header X-Real-IP $remote_addr;
        fastcgi_read_timeout 360;
    }
}
I am quite an amateur at server configuration, so any detailed how-to would be appreciated!
fastcgi_read_timeout applies to fastcgi_pass. Since you are using proxy_pass, you need proxy_read_timeout instead.
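A minimal sketch of the adjusted site.conf, reusing the 360-second value from the question (the extra proxy_send_timeout and proxy_connect_timeout values are assumptions, not required by the answer):
server {
    listen 80;
    server_name server.net www.server.net;

    location /Server-1.0-SNAPSHOT/ {
        proxy_pass http://localhost:8080/Server-1.0-SNAPSHOT/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 360s;      # replaces fastcgi_read_timeout, which proxy_pass ignores
        proxy_send_timeout 360s;      # assumption: raise the send timeout as well
        proxy_connect_timeout 60s;    # assumption
    }
}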
