Nginx proxy_cache_lock in multiple locations

http {
    ...
    server {
        ...
        location /good {
            proxy_cache mycache;
            proxy_cache_key $arg_cachekey;
            proxy_cache_valid 200 1h;
            proxy_cache_lock on;
            proxy_cache_lock_timeout 20m;
            proxy_cache_lock_age 20m;
            ...
            (upstream returning 200 with the content)
        }
        location /bad {
            proxy_cache mycache;
            proxy_cache_key $arg_cachekey;
            proxy_cache_lock on;
            proxy_cache_lock_timeout 20m;
            proxy_cache_lock_age 20m;
            ...
            (upstream returning 404)
        }
    }
}
The cache is empty. Requesting:
GET /good?cachekey=123
and then, a short time later, while the /good upstream is still responding with the content, requesting:
GET /bad?cachekey=123
Should the request to the /bad location wait until /good has populated the cache, so that /bad responds with 200? If not, how can I achieve that?
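One way to observe what each location actually serves while testing this is to expose the cache status in both location blocks; a minimal sketch against the configuration above, with the header name chosen purely for illustration:

add_header X-Cache-Status $upstream_cache_status always;   # "always" so the header also appears on the 404 from /bad

The $upstream_cache_status variable then reports MISS, HIT, EXPIRED, and so on for each response.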

Related

Nginx is not caching the path I set up to cache

proxy_cache_path /tmp/nginx_team_alert_cache keys_zone=team_alerts:10m levels=1:2 max_size=1g use_temp_path=off;

server {
    ...
    location /api/timeentry/timeentry/team_alerts/ {
        proxy_cache team_alerts;
        proxy_ignore_headers Cache-Control Set-Cookie;
        proxy_hide_header "Set-Cookie";
        proxy_cache_valid 200 5s;
        proxy_cache_key $scheme$host$request_method$request_uri;
        proxy_buffering on;
        add_header X-Cached $upstream_cache_status;
        include uwsgi_params;
        uwsgi_pass unix:/tmp/app.sock;
    }
}
I have been searching on Stack Overflow and elsewhere and added all the recommended options, but responses are still not being cached.
I just realised I am using uwsgi_pass, so the proxy_* directives don't apply here; simply replacing proxy_* with uwsgi_* worked.
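For reference, a minimal sketch of that change, with the zone declared via uwsgi_cache_path to match (names and paths taken from the snippet above):

uwsgi_cache_path /tmp/nginx_team_alert_cache keys_zone=team_alerts:10m levels=1:2 max_size=1g use_temp_path=off;

server {
    location /api/timeentry/timeentry/team_alerts/ {
        # uwsgi_* counterparts of the proxy_* caching directives
        uwsgi_cache team_alerts;
        uwsgi_ignore_headers Cache-Control Set-Cookie;
        uwsgi_hide_header Set-Cookie;
        uwsgi_cache_valid 200 5s;
        uwsgi_cache_key $scheme$host$request_method$request_uri;
        uwsgi_buffering on;
        add_header X-Cached $upstream_cache_status;
        include uwsgi_params;
        uwsgi_pass unix:/tmp/app.sock;
    }
}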

Different proxy_cache_valid depending on request_uri for nginx

I'm using nginx as a cache for googleapis.com. Currently all responses are cached for 5m:
proxy_cache_path /var/cache/nginx/xxx_cache keys_zone=xxx_cache:10m;

server {
    location ~ /blog/ {
        proxy_pass https://www.googleapis.com/blogger/v3/blogs/;
        proxy_cache xxx_cache;
        proxy_cache_lock on;
        proxy_cache_valid 5m;
    }
I'd like to vary this interval depending on $request_uri. Defining a $proxy_cache_valid variable via the map directive and using it in proxy_cache_valid fails with invalid time value "$proxy_cache_valid" in ...:
map $request_uri $proxy_cache_valid {
    default 5m;
    ~^/blog/[0-9]+/posts/[0-9]+ 1h;
}

proxy_cache_path /var/cache/nginx/xxx_cache keys_zone=xxx_cache:10m;

server {
    location ~ /blog/ {
        proxy_pass https://www.googleapis.com/blogger/v3/blogs/;
        proxy_cache xxx_cache;
        proxy_cache_lock on;
        proxy_cache_valid $proxy_cache_valid;
    }
How can I realise this in nginx (nginx version: nginx/1.16.1)?
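proxy_cache_valid does not accept variables, so the map approach cannot work as written. One workaround, sketched below under the assumption that /blog/ always sits at the start of the URI, is to split the regex from the map into separate location blocks, each with its own fixed lifetime. Since proxy_pass inside a regex location cannot carry a URI part, the prefix is added with an explicit rewrite:

# regex locations are matched in order of appearance, so the more specific pattern comes first
location ~ ^/blog/[0-9]+/posts/[0-9]+ {
    rewrite ^/blog/(.*)$ /blogger/v3/blogs/$1 break;
    proxy_pass https://www.googleapis.com;
    proxy_cache xxx_cache;
    proxy_cache_lock on;
    proxy_cache_valid 1h;
}

location ~ ^/blog/ {
    rewrite ^/blog/(.*)$ /blogger/v3/blogs/$1 break;
    proxy_pass https://www.googleapis.com;
    proxy_cache xxx_cache;
    proxy_cache_lock on;
    proxy_cache_valid 5m;
}

Which block a request matches then determines how long its response stays cached.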

Nginx reverse proxy on AWS Beanstalk and caching

I have a reverse proxy set up in AWS Beanstalk. The purpose is for nginx to fetch a value from the upstream Location header and store the result with the cache key of the original request URI so that future clients don't need to follow the redirect too.
Disk usage gets up to 68G in /var/nginx/cache and 30G in /var/nginx/temp-cache, so my proxy server's disk fills up pretty fast.
Does anyone know how I can reduce or limit the size of my cache? Or is there a more efficient way of doing this so my disk doesn't fill up so fast? Thanks.
worker_processes 1;
user nginx;
pid /var/run/nginx.pid;

events {
    worker_connections 65535;
}

worker_rlimit_nofile 30000;

http {
    proxy_cache_path /var/nginx/cache keys_zone=rev-origin:100m levels=1:2 inactive=7d max_size=80g;
    proxy_temp_path /var/nginx/temp-cache;

    server {
        listen 80 default_server;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        charset utf-8;
        client_max_body_size 500M;
        gzip on;

        location / {
            proxy_pass https://123456abc.execute-api.us-east-1.amazonaws.com/AB/;
            proxy_ssl_server_name on;
            proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
            proxy_cache rev-origin;
            proxy_cache_key $uri;
            proxy_cache_valid 200 206 1d;
            proxy_intercept_errors on;
            recursive_error_pages on;
            error_page 301 302 307 = @handle_redirects;
        }

        location @handle_redirects {
            resolver 8.8.8.8;
            set $original_uri $uri;
            set $orig_loc $upstream_http_location;

            # nginx goes to fetch the value from the upstream Location header
            proxy_pass $orig_loc;
            proxy_cache rev-origin;
            proxy_intercept_errors on;
            error_page 301 302 307 = @handle_redirects;

            # But we store the result with the cache key of the original request URI
            # so that future clients don't need to follow the redirect too
            proxy_cache_key $original_uri;
            proxy_cache_valid 200 206 3000h;
        }
    }
}
I found a solution by testing a few directives over the past few weeks. What seems to work and keep the cache size down is this combination:
proxy_max_temp_file_size 1024m;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
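Those mainly bound the buffering and temp-file behaviour; the cache directory itself is ultimately capped by the proxy_cache_path parameters, so another lever (the numbers below are purely illustrative) is to lower max_size and shorten inactive:

# max_size caps the cache on disk (the cache manager evicts least-recently-used
# entries beyond it) and inactive drops entries that have not been requested recently
proxy_cache_path /var/nginx/cache keys_zone=rev-origin:100m levels=1:2 inactive=1d max_size=20g;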

nginx caching a few upstreams on the same server

I'm trying to build an nginx server that caches a few upstream servers.
My nginx conf looks like this:
...
http {
    upstream srv1 {
        ip_hash;
        server srv1.domain1.fr:443;
    }
    upstream srv2 {
        ip_hash;
        server srv2.domain2.fr:443;
    }
    ...
    proxy_cache_path /nginx/cache/cache_temp use_temp_path=off keys_zone=cache_temp:10m max_size=10g inactive=10m;
    proxy_cache cache_temp;
    ...

    # srv1
    server {
        listen 443 ssl http2;
        server_name srv1.domain1.fr;
        all ssl settings...

        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css|mp3|swf|ico|flv|woff|woff2|ttf|svg)$ {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv1;
        }

        location / {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv1;
        }
    }

    # srv2
    server {
        listen 443 ssl http2;
        server_name srv2.domain2.fr;
        all ssl settings...

        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css|mp3|swf|ico|flv|woff|woff2|ttf|svg)$ {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv2;
        }

        location / {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv2;
        }
    }
In my DNS I point srv1.domain1.fr and srv2.domain2.fr at the same IP.
That works well, but when I switch between the two an issue occurs: the cache is shared between them.
So I'm trying to find a way to get a separate cache for each.
Any idea?
Thanks.
Here is some more of the configuration:
proxy_redirect off;
proxy_http_version 1.1;
proxy_read_timeout 10s;
proxy_send_timeout 10s;
proxy_connect_timeout 10s;
proxy_cache_path /nginx/cache/cache_temp use_temp_path=off keys_zone=cache_temp:10m max_size=10g inactive=10m;
proxy_cache cache_temp;
proxy_cache_methods GET HEAD;
proxy_cache_key $uri;
proxy_cache_valid 404 3s;
proxy_cache_lock on;
proxy_cache_lock_age 5s;
proxy_cache_lock_timeout 1h;
proxy_ignore_headers Cache-Control;
proxy_ignore_headers Set-Cookie;
proxy_cache_use_stale updating;
Change proxy_cache_key from
proxy_cache_key $uri;
to
proxy_cache_key $scheme$proxy_host$request_uri;
This resolves the behaviour: with $uri alone, requests for the same path on both virtual hosts map to the same cache entry, while $proxy_host differs per upstream and keeps the entries separate.
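Applied to the shared http-level settings from the question, that is (a sketch showing only the relevant lines):

http {
    proxy_cache_path /nginx/cache/cache_temp use_temp_path=off keys_zone=cache_temp:10m max_size=10g inactive=10m;
    proxy_cache cache_temp;
    # $proxy_host evaluates to srv1 or srv2 depending on which upstream the
    # request is proxied to, so the two server blocks no longer share entries
    proxy_cache_key $scheme$proxy_host$request_uri;
    ...
}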

Nginx "Proxy_Cache" Configuration Help Needed

I'm running into a really weird issue. I only want to enable proxy_cache for "new-site.com". However, when doing so, nginx is proxy-caching all of my websites.
I've gone through all my vhost/config files and made sure that all "http" and "server" blocks were opened and closed correctly. It's my understanding that proxy_cache is only enabled for a site when you include (for example) "proxy_cache new-site;" in that website's "server" block.
In my "http" block I load all of my websites' .conf files, but none of them includes any proxy_cache directives.
What am I doing wrong?
Here is a snippet of my config file:
http {
    ...
    ...

    # nginx cache
    proxy_cache_path /www/new-site.com/httpdocs/cache levels=1:2
                     keys_zone=new-site:10m
                     max_size=50m
                     inactive=1440m;
    proxy_temp_path /www/new-site.com/httpdocs/cache/tmp 1 2;

    # virtual hosting
    include /etc/nginx/vhosts/*.conf;
}
Then here is my "new-site.com" vhost conf file:
server {
    listen xxx.xxx.xxx.xxx:80;
    server_name new-site.com;
    root /www/new-site.com/httpdocs;
    index index.php;
    ...
    ...
    proxy_cache new-site;

    location / {
        try_files $uri @backend;
    }

    location ~* \.php {
        include /usr/local/etc/nginx/proxypass.conf;
        proxy_ignore_headers Expires Cache-Control;
        proxy_cache_use_stale error timeout invalid_header updating http_502;
        proxy_cache_bypass $cookie_session $http_secret_header;
        proxy_no_cache $cookie_session;
        add_header X-Cache $upstream_cache_status;
        proxy_cache_valid 200 302 5m;
        proxy_cache_valid 404 1m;
        proxy_pass http://127.0.0.1:80;
    }

    location @backend {
        include /usr/local/etc/nginx/proxypass.conf;
        proxy_ignore_headers Expires Cache-Control;
        proxy_cache_use_stale error timeout invalid_header updating http_502;
        proxy_cache_bypass $cookie_session $http_secret_header;
        proxy_no_cache $cookie_session;
        add_header X-Cache $upstream_cache_status;
        proxy_cache_valid 200 302 5m;
        proxy_cache_valid 404 1m;
        proxy_pass http://127.0.0.1:80;
    }

    location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|css|js)$ {
        ....
    }
}
Once I moved the line "proxy_cache new-site;" into a "location" block, that resolved the issue for me.
I'm not sure why I had the issue when it sat outside a location block, though.
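For reference, a minimal sketch of that arrangement (only the caching-related lines are shown; exactly which location blocks need the directive depends on what should be cached):

location ~* \.php {
    proxy_cache new-site;   # moved down from the server block
    include /usr/local/etc/nginx/proxypass.conf;
    proxy_cache_valid 200 302 5m;
    proxy_cache_valid 404 1m;
    proxy_pass http://127.0.0.1:80;
}

location @backend {
    proxy_cache new-site;   # and here as well
    include /usr/local/etc/nginx/proxypass.conf;
    proxy_cache_valid 200 302 5m;
    proxy_cache_valid 404 1m;
    proxy_pass http://127.0.0.1:80;
}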
