We are using a VM running an nginx proxy (caching turned off) to redirect hosts to the correct CDN URL.
We are experiencing issues with the proxy serving old content that does not match what the CDN shows. The CDN provider we use is Verizon (through Azure - Microsoft CDN by Verizon).
When we update the origin we automatically send purge requests to the CDN. These can be both manual and automatic updates triggered dynamically, and both single-URL purges and wildcard ones. What seems to happen is that when two purge requests arrive close together in time, the first one propagates to the proxy but the second one does not, although both show correctly when accessing the CDN URL directly.
Worth mentioning: this issue only happens about 30% of the time.
nginx sample conf:
server {
resolver 8.8.8.8;
server_name <CUSTOM HOST>;
location / {
# Turn off all caching
proxy_no_cache 1;
proxy_cache_bypass 1;
proxy_redirect off;
expires -1;
proxy_cache off;
# Proxy settings
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 128;
# Override cache headers to set 2min browser cache
proxy_hide_header Cache-Control;
add_header Cache-Control max-age=120;
proxy_pass "<CDN URL>request_uri";
}
}
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Below is an example where the CDN is showing newer content than the proxy; we have a Last-Modified mismatch:
CDN:
PROXY:
I have tried going through a VPN to see if there is anything wrong with a particular POP that the proxy hits, but all POPs show the correct content.
When the error is present, sending a curl request from the proxy to the CDN will result in the same incorrect headers.
After we perform a purge, several requests go through to the origin directly until the CDN starts serving a cached version again.
Then we receive the first HIT about one minute later.
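One thing that could help narrow this down (a sketch, not something we have deployed) is logging which upstream address answered each request and which Last-Modified it returned, using the standard $upstream_* variables:
# In the http block: record the upstream address and the Last-Modified header it
# returned, so stale responses can be tied to a specific CDN edge or connection.
log_format cdn_debug '$remote_addr [$time_local] "$request" $status '
                     'upstream=$upstream_addr '
                     'upstream_last_modified="$upstream_http_last_modified"';
# In the server block from the sample conf above:
access_log /var/log/nginx/cdn_debug.log cdn_debug;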
I started to assume that this might have something to do with Azure & Verizon internally.
So I created an exact duplicate proxy hosted on Amazon, but the error seems to persist.
Is there something else in the nginx configuration that can cause this behavior?
Try adding the following, just to see whether the pages are fetched fresh from the upstream on each hit.
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
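For context, these directives would go inside the existing location block from the sample conf, roughly like this (a sketch; the CDN URL placeholder is left unchanged):
location / {
    proxy_pass "<CDN URL>$request_uri";
    # Debugging only: make every response look uncacheable end to end.
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
}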
Related
I'm trying to create a basic API which does stuff, as an API does. However, it is sitting behind both an Nginx instance and a Cloudflare layer for security, and every time I make a request all the headers go through fine but the body of the request (application/json) seems to be getting removed.
I have tried logging it on the nginx instance and I just get '-' for every request, so I think it could be Cloudflare. I have tested locally and I am definitely able to receive the body as is. I've looked through the req object and there is no body anywhere; all the auth headers are fine, just not the body.
EDIT (in response to AD7six): Sorry, I'll clear my question up: I'm saying that both the access log is missing the body and that my code behind the proxy does not receive it. I'll attach the nginx config / log now.
On further inspection, my nginx config is listening on port 80 while all the traffic is going over HTTPS... I hope that makes sense.
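If HTTPS requests need to reach this nginx directly, a matching 443 listener would look roughly like this (a sketch; the certificate paths are placeholders, not from my setup):
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name dev.ru-pirie.com;
    ssl_certificate     /etc/ssl/certs/example.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/private/example.key; # placeholder path
    location / {
        access_log /var/log/nginx/postdata.log postdata;
        proxy_pass http://192.168.1.74:3000;
    }
}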
NGINX Config
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
log_format postdata $request_body;
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80;
listen [::]:80;
server_name dev.ru-pirie.com;
location / {
access_log /var/log/nginx/postdata.log postdata;
proxy_pass http://192.168.1.74:3000;
}
}
}
All the log contains is a single - per request.
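One caveat with the logging approach above: $request_body is only populated in locations handled by proxy_pass (and only when the body was read into a memory buffer), and a raw body can mangle the log line, so a slightly more defensive sketch would be:
# Sketch: JSON-escape the body so it cannot break the log line.
log_format postdata escape=json '$remote_addr "$request" body="$request_body"';
server {
    listen 80;
    server_name dev.ru-pirie.com;
    location / {
        # $request_body is filled in here because the request is proxied.
        access_log /var/log/nginx/postdata.log postdata;
        proxy_pass http://192.168.1.74:3000;
    }
}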
When requests are proxied via Cloudflare, by default they are modified with additional headers, for example CF-Connecting-IP, which shows the IP of the original client that sent the request (full list here).
There are other features that Cloudflare users can implement that may alter the request, but only when explicitly configured to do so: for example, someone could write a Cloudflare Worker that arbitrarily modifies the incoming request before forwarding it to the origin server. Other general HTTP request changes are possible using Cloudflare Rules.
Cloudflare would not alter the body of an incoming request before passing it to the origin unless explicitly configured to do so, for example with Workers.
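If the goal is to work with the client details Cloudflare forwards, nginx can pick them up from those headers; a sketch using the standard realip module (the IP range shown is illustrative only - Cloudflare publishes the full list):
# Sketch: treat Cloudflare as a trusted proxy and restore the real client IP
# from the CF-Connecting-IP header (requires ngx_http_realip_module).
set_real_ip_from 173.245.48.0/20;   # illustrative; use Cloudflare's published ranges
real_ip_header CF-Connecting-IP;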
I have an Angular app on a production server (using Universal for server-side rendering) running on Node Express at localhost:4000, and I configured an Nginx reverse proxy for the app. The production server uses HTTPS for its domain name.
Here is nginx config in /etc/config/sites-enabled:
location ~ (/landing/home|/landing/company|/landing/contact|/landing/support) {
proxy_pass http://127.0.0.1:4000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_bypass $http_upgrade;
add_header X-Cache-Status $upstream_cache_status;
}
location / {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
Here is nginx.conf
user ubuntu;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 64;
server_name_in_redirect off;
# include /etc/nginx/mime.types;
# default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
In Chrome Dev Tools - Network, here is a sample request & response for an image (an SVG file). The image that nginx sent is an older version, and the file has since been updated (file name unchanged) on the Angular side. Please note that this file is just a sample; the issue I'm facing is not with just this one file, but with all static files (including CSS and JS files).
request header
response header
To verify, I did curl on a client and on the server, against the same image file. Here are the results:
curl result from a client browser (the response came from nginx)
curl result on the server, comparing curl to localhost:4000 with curl to the actual public URL
We can see that the response from localhost:4000 is the latest version of the image, whereas the response from the public URL is an older version of the same image.
I checked /etc/nginx and there is no cache folder in there. I thought about clearing nginx's cache, but I couldn't find one.
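That is consistent with how nginx works: proxied responses are only cached when a cache zone is explicitly defined and referenced, so unless the config contains something like the sketch below (which the config above does not), there is no nginx cache to clear:
# Sketch of what an actual nginx proxy cache setup would look like;
# none of these directives appear in the config above.
proxy_cache_path /var/cache/nginx/app levels=1:2 keys_zone=app_cache:10m max_size=1g;
server {
    location / {
        proxy_cache app_cache;
        proxy_pass http://127.0.0.1:4000;
    }
}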
I have tried adding many things in config, including:
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache';
if_modified_since off;
expires off;
etag off;
and
proxy_cache off;
And somehow even the X-Cache-Status header doesn't show up in the response headers either, but comparing the curl results from localhost and from the public URL, it is clear to me that it must be something to do with nginx.
Does anyone have a suggestion on what to do to make nginx send the response from the actual output of localhost:4000, instead of from a cache? Thank you.
UPDATE #1:
Sorry, I only included a partial nginx conf. I have found the root cause: I actually have two Node Express servers running on the same domain, one on port 4000 (Angular Universal) and the other on port 5000 (non-Universal). I have updated the excerpt of the nginx conf above to include the other location directive for the one on port 5000. Please see my answer below for further explanation of what I did wrong to cause the problem in this question.
I found out the root cause of the problem.
I actually have two Node Express servers running on the same server and domain. One is on port 4000 (uses Angular Universal), and the other is on port 5000 (non-Universal). I have edited my question to include the second location directive in the nginx conf.
The way I had my nginx conf made it look like the whole page came as a response from localhost:4000, but some parts within the page (images, style sheets, etc.) were actually responses from localhost:5000, because the URLs of those requests did not match the pattern in the nginx conf for localhost:4000. So localhost:5000 got to respond to those requests, and the files that localhost:5000 had were older versions (not all of them, but the one I tested with curl happened to be an older version).
I only realized this when I disabled the second location directive in the nginx conf, effectively preventing localhost:5000 from responding to any request, and then I saw many 404 errors because of that.
To solve this problem, meaning to have both localhost:4000 and localhost:5000 active and still get the correct responses, I had to make some adjustments to the routing in my Angular code.
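For completeness, an nginx-side alternative (not what I did, and the asset extensions are just an assumption about the app) would be to route static assets to port 4000 explicitly:
# Sketch: send static assets to the Universal app on :4000 instead of letting
# them fall through to the default location that proxies to :5000.
location ~* \.(js|css|svg|png|jpg|ico)$ {
    proxy_pass http://127.0.0.1:4000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}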
I have 3 docker containers in the same network:
Storage (golang) - it provides API for uploading video files.
Streamer (nginx) - it streams uploaded files
Reverse Proxy (let's call it just Proxy)
I have HTTPS protocol between User and Proxy.
Let's assume that there is a file with id=c14de868-3130-426a-a0cc-7ff6590e9a1f and User wants to see it. So User makes a request to https://stream.example.com/hls/master.m3u8?id=c14de868-3130-426a-a0cc-7ff6590e9a1f. Streamer knows video id (from query param), but it doesn't know the path to the video, so it makes a request to the storage and exchanges video id for the video path. Actually it does proxy_pass to http://docker-storage/getpath?id=c14de868-3130-426a-a0cc-7ff6590e9a1f.
docker-storage is an upstream server, and the protocol is HTTP because there is no SSL connection between the Docker containers on the local network.
After Streamer gets the path to the file, it starts streaming. But the User's browser starts throwing Mixed Content errors, because the first request was over HTTPS and after proxy_pass it became HTTP.
Here is nginx.conf file (it is a Streamer container):
worker_processes auto;
events {
use epoll;
}
http {
error_log stderr debug;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
vod_mode local;
vod_metadata_cache metadata_cache 16m;
vod_response_cache response_cache 512m;
vod_last_modified_types *;
vod_segment_duration 9000;
vod_align_segments_to_key_frames on;
vod_dash_fragment_file_name_prefix "segment";
vod_hls_segment_file_name_prefix "segment";
vod_manifest_segment_durations_mode accurate;
open_file_cache max=1000 inactive=5m;
open_file_cache_valid 2m;
open_file_cache_min_uses 1;
open_file_cache_errors on;
aio on;
upstream docker-storage {
# There is a docker container called storage on the same network
server storage:9000;
}
server {
listen 9000;
server_name localhost;
root /srv/static;
location = /exchange-id-to-path {
proxy_pass $auth_request_uri;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Original-URI $request_uri;
# I tried to experiment with this header
proxy_set_header X-Forwarded-Proto https;
set $filepath $upstream_http_the_file_path;
}
location /hls {
# I use auth_request module just to get the path from response header (The-File-Path)
set $auth_request_uri "http://docker-storage/getpath?id=$arg_id";
auth_request /exchange-id-to-path;
auth_request_set $filepath $upstream_http_the_file_path;
# Here I provide path to the file I want to stream
vod hls;
alias $filepath/$arg_id;
}
}
}
Here is a screenshot from browser console:
Here is the response from a successful (200) request:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1470038,RESOLUTION=1280x720,FRAME-RATE=25.000,CODECS="avc1.4d401f,mp4a.40.2"
http://stream.example.com/hls/index-v1-a1.m3u8
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=171583,RESOLUTION=1280x720,CODECS="avc1.4d401f",URI="http://stream.example.com/hls/iframes-v1-a1.m3u8"
The question is: how do I keep the HTTPS protocol after proxy_pass to HTTP?
p.s. I use Kaltura nginx-vod-module for streaming video files.
I think proxy_pass isn't the problem here. When the vod module returns the index path it uses an absolute URL with the HTTP protocol. A relative URL should be enough, since the index file and the chunks are under the same domain (if I understood it correctly).
Try setting vod_hls_absolute_index_urls off; (and vod_hls_absolute_master_urls off; as well), so your browser sends requests relative to the stream.example.com domain using HTTPS.
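In the Streamer config above, that would mean adding something along these lines (a sketch based on the directives named above):
# Sketch: make the vod module emit relative URLs in the generated playlists,
# so the browser keeps requesting segments via https://stream.example.com.
vod_hls_absolute_master_urls off;
vod_hls_absolute_index_urls off;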
Are there any performance benefits or degradations in using both Varnish and the nginx proxy cache together? I have a Magento 2 site running with the nginx cache, Redis for session storage and backend cache, and Varnish in front, all on the same CentOS machine. Any input or advice, please? Below is the currently used nginx configuration file.
# Server globals
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /var/run/nginx.pid;
# Worker config
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
# Main settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_header_timeout 1m;
client_body_timeout 1m;
client_header_buffer_size 2k;
client_body_buffer_size 256k;
client_max_body_size 256m;
large_client_header_buffers 4 8k;
send_timeout 30;
keepalive_timeout 60 60;
reset_timedout_connection on;
server_tokens off;
server_name_in_redirect off;
server_names_hash_max_size 512;
server_names_hash_bucket_size 512;
# Proxy settings
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
# SSL PCI Compliance
ssl_session_cache shared:SSL:40m;
ssl_buffer_size 4k;
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
# Error pages
error_page 403 /error/403.html;
error_page 404 /error/404.html;
error_page 502 503 504 /error/50x.html;
# Cache settings
proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=1024m;
proxy_cache_key "$host$request_uri $cookie_user";
proxy_temp_path /var/cache/nginx/temp;
proxy_ignore_headers Expires Cache-Control;
proxy_cache_use_stale error timeout invalid_header http_502;
proxy_cache_valid any 1d;
# Cache bypass
map $http_cookie $no_cache {
default 0;
~SESS 1;
~wordpress_logged_in 1;
}
# File cache settings
open_file_cache max=10000 inactive=30s;
open_file_cache_valid 60s;
open_file_cache_min_uses 2;
open_file_cache_errors off;
# Wildcard include
include /etc/nginx/conf.d/*.conf;
}
It would simply be undesirable.
Magento and Varnish work together as a tightly coupled pair. The key to efficient caching is that your app (Magento) can invalidate a specific page's cache when its content has changed.
E.g. you updated a price for a product - Magento talks to Varnish and sends a purge request for specific cache tags, which include the product ID.
There is simply no such integration between Magento and NGINX, so you risk, at minimum, having:
stale pages / old product data being displayed
users seeing each other's accounts (as long as you keep the config above), unless you configure the nginx cache to bypass on Magento-specific cookies
The only benefit of having a cache in NGINX (on the TLS side) is saving on absolutely negligible proxy buffering overhead. It's definitely not worth the trouble, so you should use only the cache in Varnish.
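In practice that means nginx stays a plain TLS terminator in front of Varnish, with no proxy_cache_path / proxy_cache at all; roughly like this (a sketch, assuming Varnish listens on its default 127.0.0.1:6081, with placeholder hostname and certificate paths):
server {
    listen 443 ssl http2;
    server_name shop.example.com;                  # placeholder
    ssl_certificate     /etc/ssl/certs/shop.crt;   # placeholder
    ssl_certificate_key /etc/ssl/private/shop.key; # placeholder
    location / {
        # All caching is left to Varnish; nginx only terminates TLS.
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}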
We are trying to build an HA Kubernetes cluster with 3 core nodes, each having the full set of vital components: etcd + APIServer + Scheduler + ControllerManager, plus an external balancer. Since etcd can form clusters by itself, we are stuck with making the API servers HA. What seemed an obvious task a couple of weeks ago has now become a "no way" disaster...
We decided to use nginx as a balancer for 3 independent API servers. All the other parts of our cluster that communicate with the API server (kubelets, kube-proxies, schedulers, controller managers...) are supposed to use the balancer to access it. Everything went well until we started the "destructive" tests (as I call them) with some pods running.
Here is the part of the APIServer config that deals with HA:
.. --apiserver-count=3 --endpoint-reconciler-type=lease ..
Here is our nginx.conf:
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
events {
multi_accept on;
use epoll;
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
gzip on;
underscores_in_headers on;
include /etc/nginx/conf.d/*.conf;
}
And apiservers.conf:
upstream apiserver_https {
least_conn;
server core1.sbcloud:6443; # max_fails=3 fail_timeout=3s;
server core2.sbcloud:6443; # max_fails=3 fail_timeout=3s;
server core3.sbcloud:6443; # max_fails=3 fail_timeout=3s;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 6443 ssl so_keepalive=1m:10s:3; # http2;
ssl_certificate "/etc/nginx/certs/server.crt";
ssl_certificate_key "/etc/nginx/certs/server.key";
expires -1;
proxy_cache off;
proxy_buffering off;
proxy_http_version 1.1;
proxy_connect_timeout 3s;
proxy_next_upstream error timeout invalid_header http_502; # non_idempotent # http_500 http_503 http_504;
#proxy_next_upstream_tries 3;
#proxy_next_upstream_timeout 3s;
proxy_send_timeout 30m;
proxy_read_timeout 30m;
reset_timedout_connection on;
location / {
proxy_pass https://apiserver_https;
add_header Cache-Control "no-cache";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header Authorization $http_authorization;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-SSL-CLIENT-CERT $ssl_client_cert;
}
}
What came out of some tests is that Kubernetes seems to use a single long-lived connection instead of traditional open-close sessions. This is probably due to SSL. So we had to increase proxy_send_timeout and proxy_read_timeout to a ridiculous 30m (the default value for the APIServer is 1800s). If these settings are under 10m, all clients (like the Scheduler and ControllerManager) generate tons of INTERNAL_ERROR messages because of broken streams.
So, for the crash test I simply put one of the API servers down by gently switching it off. Then I restarted another one, so nginx saw that the upstream went down and switched all current connections to the last one. A couple of seconds later the restarted API server came back and we had 2 API servers working. Then I took the network down on the third API server by running 'systemctl stop network' on that server, so it had no chance to inform Kubernetes or nginx that it was going down.
Now the cluster is totally broken! nginx seems to recognize that the upstream went down, but it will not reset the already existing connections to the upstream that is dead. I can still see them with 'ss -tnp'. If I restart the Kubernetes services, they reconnect and continue to work; same if I restart nginx - new sockets show up in the ss output.
This happens only if I make an API server unavailable by taking the network down (preventing it from closing its existing connections to nginx and informing Kubernetes that it is switching off). If I just stop it, everything works like a charm. But this is not a realistic case - a server can go down without any warning, just instantly.
What are we doing wrong? Is there a way to force nginx to drop all connections to an upstream that went down? Anything to try before we move to HAProxy or LVS and throw away a week spent kicking nginx in our attempts to make it balance instead of breaking our not-so-HA cluster?
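For reference, the nginx-side settings we could still try (a sketch of assumptions, not a confirmed fix) are re-enabling the commented-out max_fails/fail_timeout on the upstream servers and turning on TCP keepalive probes for upstream connections, so the OS can eventually detect a peer that vanished without closing its sockets:
upstream apiserver_https {
    least_conn;
    server core1.sbcloud:6443 max_fails=3 fail_timeout=3s;
    server core2.sbcloud:6443 max_fails=3 fail_timeout=3s;
    server core3.sbcloud:6443 max_fails=3 fail_timeout=3s;
}
server {
    # ... existing listen/ssl/proxy settings from apiservers.conf ...
    # Available since nginx 1.15.6: enable TCP keepalive on upstream sockets.
    proxy_socket_keepalive on;
}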