CDN support/configuration for serving "stale" content, refreshing in background

Goal
Always serve content from a CDN EDGE cache, regardless of how stale. Refresh it in the background when possible.
Problem
I have a NextJS app that renders some React components server-side and delivers them to the client. For this discussion, let's just consider my homepage, which is unauthenticated and the same for everyone.
What I'd like is for the server rendered homepage to be cached at a CDN's EDGE nodes and served to end clients from that cache as often as possible, or always.
From what I've read, CDNs (like Fastly) which properly support cache related header settings like Surrogate-Control and Cache-Control: stale-while-revalidate should be able to do this, but in practice, I'm not seeing this working like I'd expect. I'm seeing either:
requests miss the cache and return to the origin when a prior request should have warmed it
requests are served from cache, but never get updated when the origin publishes new content
Example
Consider the following timeline:
[T0] - Visitor1 requests www.mysite.com - The CDN cache is completely cold, so the request must go back to my origin (AWS Lambda) and recompute the homepage. A response is returned with the headers Surrogate-Control: max-age=100 and Cache-Control: public, no-store, must-revalidate.
Visitor1 is then served the homepage, but they had to wait a whopping 5 seconds! YUCK! May no other visitor ever have to suffer the same fate.
[T50] - Visitor2 requests www.mysite.com - The CDN cache contains my document and returns it to the visitor immediately. They only had to wait 40ms! Awesome. In the background, the CDN refetches the latest version of the homepage from my origin. Turns out it hasn't changed.
[T80] - www.mysite.com publishes new content to the homepage, making any cached content truly stale. V2 of the site is now live!
[T110] - Visitor1 returns to www.mysite.com - From the CDN's perspective, it's only been 60s since Visitor2's request, which means the background refresh initiated by Visitor2 should have left a copy of the homepage in the cache that is less than 100s old (albeit V1, not V2, of the homepage). Visitor1 is served the 60s-old V1 homepage from cache. A much better experience for Visitor1 this time!
This request initiates a background refresh of the stale content in the CDN cache, and the origin this time returns V2 of the website (which was published 30s ago).
[T160] - Visitor3 visits www.mysite.com - Despite being a new visitor, the CDN cache is now fresh from Visitor1's most recent trigger of a background refresh. Visitor3 is served a cached V2 homepage.
...
As long as at least 1 visitor comes to my site every 100s (because max-age=100), no visitor will ever suffer the wait time of a full roundtrip to my origin.
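Expressed as headers, the behaviour I'm describing is roughly what something like the following is supposed to mean (the 86400 is just an arbitrary, generous revalidation window for illustration):

    Cache-Control: public, s-maxage=100, stale-while-revalidate=86400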
Questions
1. Is this a reasonable ask of a modern CDN? I can't imagine this is more taxing than always returning to the origin (no CDN cache), but I've struggled to find documentation from any CDN provider about the right way to do this. I'm working with Fastly now, but am willing to try any others as well (I tried Cloudflare first, but read that they don't support stale-while-revalidate)
2. What are the right headers to do this with? (assuming the CDN provider supports them)
I've played around with both Surrogate-Control: maxage=<X> and Cache-Control: public, s-maxage=<X>, stale-while-revalidate in Fastly and Cloudflare, but neither seems to do this correctly (requests well within the configured lifetime don't pick up changes on the origin until there is a cache miss).
3. If this isn't supported, are there API calls that could allow me to PUSH content updates to my CDN's cache layer, effectively saying "Hey I just published new content for this cache key. Here it is!"
I could use a Cloudflare Worker to implement this kind of caching myself using their KV store, but I thought I'd do a little more research before implementing a code solution to a problem that seems to be pretty common.
Thanks in advance!

I've been deploying a similar application recently. I ended up running a customised nginx instance in front of the Next.js server. In summary, the config below does the following:
Ignore cache headers from the upstream server.
I wanted to cache markup and JSON, but I didn't want to send Cache-Control headers to the client. You could tweak this config to use the values in Cache-Control from Next.js, and then drop that header before responding to the client if the MIME type is text/html or application/json.
Consider all responses valid for 10 minutes.
Remove cached responses after 30 days.
Use up to 800 MB for the cache.
After serving a stale response, attempt to fetch a new response from the upstream server.
This isn't perfect, but it handles the important stale-while-revalidate behaviour. You could run a CDN over this as well if you want the benefit of global propagation.
Warning: This hasn't been extensively tested. I'm not confident that all the behaviour around error pages and response codes is right.
# Available in NGINX Plus
# map $request_method $request_method_is_purge {
#     PURGE   1;
#     default 0;
# }

proxy_cache_path
    /nginx/cache
    inactive=30d
    max_size=800m
    keys_zone=cache_zone:10m;

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Basic
    root /nginx;
    index index.html;
    try_files $uri $uri/ =404;
    access_log off;
    log_not_found off;

    # Redirect server error pages to the static page /error.html
    error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 500 501 502 503 504 505 /error.html;

    # Catch error page route to prevent it being proxied.
    location /error.html {}

    location / {
        # Let the backend server know the frontend hostname, client IP, and
        # client–edge protocol.
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # This header is a standardised replacement for the above two. This line
        # naively ignores any `Forwarded` header passed from the client (which could
        # be another proxy), and instead creates a new value equivalent to the two
        # above.
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";

        # Use HTTP 1.1, as 1.0 is default
        proxy_http_version 1.1;

        # Available in NGINX Plus
        # proxy_cache_purge $request_method_is_purge;

        # Enable stale-while-revalidate and stale-if-error caching
        proxy_cache_background_update on;
        proxy_cache cache_zone;
        proxy_cache_lock on;
        proxy_cache_lock_age 30s;
        proxy_cache_lock_timeout 30s;
        proxy_cache_use_stale
            error
            timeout
            invalid_header
            updating
            http_500
            http_502
            http_503
            http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control Vary;
        proxy_cache_valid 10m;

        # Prevent 502 error
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        proxy_read_timeout 3600;

        proxy_pass "https://example.com";
    }
}

Regarding question 3:
You can use new names (cache-busting file names) for every file you change, to avoid serving unwanted cached copies, or use the Cloudflare API to purge the cache, as you suggest.
It's possible; you can find more information here:
https://api.cloudflare.com/#zone-purge-files-by-cache-tags-or-host
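As a rough illustration, a purge-by-URL call against that API looks something like the following (the zone ID and API token are placeholders; purging by cache-tag or host is limited to higher-tier plans):

    curl -X POST "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/purge_cache" \
         -H "Authorization: Bearer <API_TOKEN>" \
         -H "Content-Type: application/json" \
         --data '{"files":["https://www.mysite.com/"]}'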

Related

How to handle Nginx internal request or not

I use nginx as a proxy server, and I have set proxy_intercept_errors on; together with an error_page directive: error_page 400 /400.html; location = /400.html { root /path/to/error; }.
The backend server, which is Tomcat (servlet), sometimes calls sendError, e.g. HttpServletResponse.sendError(404);. That request may come back to nginx and be redirected to 400.html.
In this situation I need to handle the internal redirect to 400.
My problem is that I use a Lua script which checks a few things on every incoming request, so I need to tell my Lua code to skip the check when the request is an internal one.
Is it possible to identify an internal request?
ngx.req.is_internal() is the answer; see:
https://github.com/openresty/lua-nginx-module#ngxreqis_internal
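A minimal sketch of how that might be wired up (assuming the existing checks live in an access_by_lua_block; the location and upstream name are placeholders):

    location / {
        access_by_lua_block {
            -- error_page redirects arrive as internal requests; skip the custom checks for those
            if ngx.req.is_internal() then
                return
            end
            -- ... existing Lua checks for external requests go here ...
        }
        proxy_pass http://backend;
    }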

Nginx cache inactive vs proxy_cache_valid

Nginx cache config:
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_cache_valid 5m;
        proxy_pass http://my_upstream;
    }
}
inactive
inactive specifies how long an item can remain in the cache without being accessed. In this example, a file that has not been requested for 60 minutes is automatically deleted from the cache by the cache manager process, regardless of whether or not it has expired.
proxy_cache_valid
Sets the caching time for different response codes. If only a caching time is specified, then only 200, 301, and 302 responses are cached.
Does proxy_cache_valid override inactive? 5m later does the cached file exist or not?
Two quotes from this blog:
Turns out proxy_cache_valid instructs Nginx that the resource could be cached for 1y IF the resource doesn't become inactive first. When you request a resource that has a longer expiration but has become inactive due to a lack of requests, it causes a cache miss.
Conclusion
proxy_cache_path should have a higher inactive time than the expiration time of the requests (proxy_cache_valid).
From official Nginx guide:
inactive specifies how long an item can remain in the cache without being accessed. In this example, a file that has not been requested for 60 minutes is automatically deleted from the cache by the cache manager process, regardless of whether or not it has expired. The default value is 10 minutes (10m). Inactive content differs from expired content. NGINX does not automatically delete content that has expired as defined by a cache control header (Cache-Control:max-age=120 for example). Expired (stale) content is deleted only when it has not been accessed for the time specified by inactive. When expired content is accessed, NGINX refreshes it from the origin server and resets the inactive timer.
So, the answers for your questions:
Does proxy_cache_valid override inactive? 5m later does the cached file exist or not?
No. They work in tandem.
proxy_cache_valid makes the cached response expire after 5 minutes.
If a cached response (expired or not) has not been accessed within 60 minutes (the inactive time), Nginx removes it.
If an expired cached response is accessed within 60 minutes, Nginx refreshes it from the origin server and resets the inactive timer.
Also this answer can help to understand proxy_cache_valid and inactive better.
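To make that relationship concrete, a config following the recommendation above (inactive longer than proxy_cache_valid; the numbers are arbitrary) would look something like this:

    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                     inactive=24h use_temp_path=off;   # evict only after 24h without a request

    server {
        location / {
            proxy_cache my_cache;
            proxy_cache_valid 200 1h;    # responses become stale (expired) after 1h
            proxy_pass http://my_upstream;
        }
    }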

nginx cache but immediately expire/revalidate using `Cache-Control: public, s-maxage=0`

I'd like to use an HTTP proxy (such as nginx) to cache large/expensive requests. These resources are identical for any authorized user, but their authentication/authorization needs to be checked by the backend on each request.
It sounds like something like Cache-Control: public, max-age=0 along with the nginx directive proxy_cache_revalidate on; is the way to do this. The proxy can cache the request, but every subsequent request needs to do a conditional GET to the backend to ensure it's authorized before returning the cached resource. The backend then sends a 403 if the user is unauthorized, a 304 if the user is authorized and the cached resource isn't stale, or a 200 with the new resource if it has expired.
In nginx, if max-age=0 is set the response isn't cached at all. If max-age=1 is set and I wait 1 second after the initial request, nginx does perform the conditional GET request; but before 1 second has passed it serves the response directly from cache, which is obviously very bad for a resource that needs to be authenticated.
Is there a way to get nginx to cache the request but immediately require revalidating?
Note this does work correctly in Apache. Here are examples for both nginx and Apache, the first 2 with max-age=5, the last 2 with max-age=0:
# Apache with `Cache-Control: public, max-age=5`
$ while true; do curl -v http://localhost:4001/ >/dev/null 2>&1 | grep X-Cache; sleep 1; done
< X-Cache: MISS from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
# nginx with `Cache-Control: public, max-age=5`
$ while true; do curl -v http://localhost:4000/ >/dev/null 2>&1 | grep X-Cache; sleep 1; done
< X-Cached: MISS
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: REVALIDATED
< X-Cached: HIT
< X-Cached: HIT
# Apache with `Cache-Control: public, max-age=0`
# THIS IS WHAT I WANT
$ while true; do curl -v http://localhost:4001/ >/dev/null 2>&1 | grep X-Cache; sleep 1; done
< X-Cache: MISS from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
# nginx with `Cache-Control: public, max-age=0`
$ while true; do curl -v http://localhost:4000/ >/dev/null 2>&1 | grep X-Cache; sleep 1; done
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS
As you can see in the first 2 examples the requests are able to be cached by both Apache and nginx, and Apache correctly caches even max-age=0 requests, but nginx does not.
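For reference, the nginx side of this test setup is presumably something along these lines (the cache path, zone name and upstream are placeholders; port 4000 matches the curl examples above):

    proxy_cache_path /var/cache/nginx keys_zone=auth_cache:10m;

    server {
        listen 4000;
        location / {
            proxy_cache auth_cache;
            proxy_cache_revalidate on;                   # use conditional GETs once the entry is stale
            proxy_pass http://backend;
            add_header X-Cached $upstream_cache_status;  # MISS / HIT / REVALIDATED
        }
    }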
I would like to address the additional questions/concerns that have come up during the conversation since my original answer, which was simply to use X-Accel-Redirect (or, if Apache compatibility is desired, X-Sendfile).
The solution that you seek as "optimal" (without X-Accel-Redirect) is incorrect, for more than one reason:
All it takes is a request from an unauthenticated user for your cache to be wiped clean.
If every other request is from an unauthenticated user, you effectively simply have no cache at all whatsoever.
Anyone can make requests to the public URL of the resource to keep your cache wiped clean at all times.
If the files served are, in fact, static, then you're wasting extra memory, time, disc and vm/cache space for keeping more than one copy of each file.
If the content served is dynamic:
Is it the same constant cost to perform authentication as resource generation? Then what do you actually gain by caching it when revalidation is always required? A constant factor less than 2x? You might as well not bother with caching simply to tick a checkmark, as real-world improvement would be rather negligible.
Is it exponentially more expensive to generate the view than to perform authentication? Sounds like a good idea to cache the view, then, and serve it to tens of thousands of requests at peak time! But for that to happen successfully you better not have any unauthenticated users lurking around (as even a couple could cause significant and unpredictable expenses of having to regen the view).
What happens with the cache in various edge-case scenarios? What if the user is denied access, without the developer using appropriate code, and then that gets cached? What if the next administrator decides to tweak a setting or two, e.g., proxy_cache_use_stale? Suddenly, you have unauthenticated users receiving privileged information. You're leaving all sorts of cache-poisoning attack vectors around by needlessly joining together independent parts of your application.
I don't think it's technically correct to return Cache-Control: public, max-age=0 for a page that requires authentication. I believe the correct response might be must-revalidate or private in place of public.
The nginx "deficiency" on the lack of support for immediate revalidation w/ max-age=0 is by design (similarly to its lack of support for .htaccess).
As per the above points, it makes little sense to immediately require re-validation of a given resource, and it's simply an approach that doesn't scale, especially when you have a "ridiculous" amount of requests per second that must all be satisfied using minimal resources and under no uncertain terms.
If you require a web-server designed by a "committee", with backwards compatibility for every kitchen-sink application and every questionable part of any RFC, nginx is simply not the correct solution.
On the other hand, X-Accel-Redirect is really simple, foolproof and de-facto standard. It lets you separate content from access control in a very neat way. It's dead simple. It actually ensures that your content will be cached, instead of your cache being wiped out clean willy-nilly. It is the correct solution worth pursuing. Trying to avoid an "extra" request every 10K servings during peak time, at the price of having only "one" request when no caching is needed in the first place, and effectively no cache when the 10K requests come by, is not the correct way to design scalable architectures.
I think your best bet would be to modify your backend with support of X-Accel-Redirect.
Its functionality is enabled by default, and is described in the documentation for proxy_ignore_headers:
“X-Accel-Redirect” performs an internal redirect to the specified URI;
You would then cache said internal resource, and automatically return it for any user that has been authenticated.
As the redirect has to be internal, there would not be any other way for it to be accessed (e.g., without an internal redirect of some sort), so, as per your requirements, unauthorised users won't be able to access it, but it could still be cached just as any other location.
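To make that suggestion concrete, a rough sketch of the nginx side might look like this (the /protected/ prefix, upstream name and cache zone are made up for illustration; the backend is assumed to respond with X-Accel-Redirect: /protected/<path> after a successful auth check):

    location / {
        # every request hits the backend, which performs the auth check and, on
        # success, responds with "X-Accel-Redirect: /protected/<path>" and no body
        proxy_pass http://backend;
    }

    location /protected/ {
        internal;                        # only reachable via the internal redirect
        proxy_pass http://backend/files/;
        proxy_cache cache_zone;          # zone assumed to be defined via proxy_cache_path
        proxy_cache_valid 200 10m;       # the expensive resource itself is cached
        # the auth check still runs on every request in "location /", so serving
        # from this cache never bypasses authorization
    }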
If you are unable to modify the backend app as suggested, or if the authentication is straightforward (such as basic auth), an alternative approach would be to carry out the authentication in Nginx itself.
Implementing this auth process and defining the cache validity period is all you would have to do; Nginx will take care of the rest, as per the process flow below:
Nginx Process Flow as Pseudo Code:
If (user = unauthorised) then
    Nginx declines request;
else
    if (cache = stale) then
        Nginx gets resource from backend;
        Nginx caches resource;
        Nginx serves resource;
    else
        Nginx gets resource from cache;
        Nginx serves resource;
    end if
end if
The con is that, depending on the auth type you have, you might need something like the Nginx Lua module to handle the logic (though see the sketch below for one stock-module alternative).
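If the auth check can be expressed as a subrequest, one way to implement this flow without Lua is nginx's auth_request module (ngx_http_auth_request_module); the following is only a sketch under that assumption, with location names, cache zone and backend paths made up:

    location /expensive/ {
        auth_request /auth;              # subrequest on every request: 2xx allows, 401/403 denies
        proxy_cache cache_zone;
        proxy_cache_valid 200 10m;       # the cache validity period from the pseudocode above
        proxy_pass http://backend;
    }

    location = /auth {
        internal;
        proxy_pass http://backend/auth-check;
        proxy_pass_request_body off;             # the auth check only needs headers
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }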
EDIT
I've seen the additional discussion and information given. Without knowing in full how the backend app works, but looking at the example config the user anki-code gave on GitHub, which you commented on HERE, the config below will avoid the issue you raised of the backend app's authentication/authorization checks not being run for previously cached resources.
I assume the backend app returns an HTTP 403 code for unauthenticated users. I also assume that you have the Nginx Lua module in place, since the GitHub config relies on it, although I note that the part you tested does not need that module.
Config:
server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://127.0.0.1:3000; # Metabase here
    }

    location ~ /api/card((?!/42/|/41/)/[0-9]*/)query {
        access_by_lua_block {
            -- HEAD request to a location excluded from caching to authenticate
            local res = ngx.location.capture("/api/card/42/query", { method = ngx.HTTP_HEAD })
            if res.status == 403 then
                return ngx.exit(ngx.HTTP_FORBIDDEN)
            else
                ngx.exec("@metabase")
            end
        }
    }

    location @metabase {
        # cache all cards' data except card 42 and card 41 (they have realtime data)
        if ($http_referer !~ /dash/) {
            # cache only cards on a dashboard
            set $no_cache 1;
        }
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        proxy_pass http://127.0.0.1:3000;
        proxy_cache_methods POST;
        proxy_cache_valid 8h;
        proxy_ignore_headers Cache-Control Expires;
        proxy_cache cache_all;
        proxy_cache_key "$request_uri|$request_body";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        add_header X-MBCache $upstream_cache_status;
    }

    location ~ /api/card/\d+ {
        proxy_pass http://127.0.0.1:3000;
        if ($request_method ~ PUT) {
            # when the card was edited, reset the cache for this card
            access_by_lua 'os.execute("find /var/cache/nginx -type f -exec grep -q \\"".. ngx.var.request_uri .."/\\" {} \\\; -delete ")';
            add_header X-MBCache REMOVED;
        }
    }
}
With this, I expect that the test with $ curl 'http://localhost:3001/api/card/1/query' will run as follows:
First Run (With Required Cookie)
Request Hits location ~ /api/card((?!/42/|/41/)/[0-9]*/)query
In Nginx Access Phase, a "HEAD" sub-request is issued to /api/card/42/query. This location is excluded from caching in the config given.
The backend app returns a non-403 response since the user is authenticated.
The request is then handed off internally (via ngx.exec) to the @metabase named location block, which handles the actual request and returns the content to the user.
Second Run (Without Required Cookie)
Request Hits location ~ /api/card((?!/42/|/41/)/[0-9]*/)query
In Nginx Access Phase, a "HEAD" sub-request is issued to the backend at /api/card/42/query.
The backend app returns a 403 Forbidden response since the user is not authenticated.
The user's client gets a 403 Forbidden response.
If /api/card/42/query is resource intensive, you may be able to create a simple card whose query is used purely for the auth check instead.
This seems a straightforward way to go about it. The backend stays as it is, without messing about with it, and you configure your caching details in Nginx.

Nginx proxy intercept redirect and pass custom redirect to client

I have a web application that wants to access files from a third party site without CORS enabled. The requests can be to an arbitrary domain with arbitrary parameters. I'm sending a request to my domain containing the target encoded as a GET parameter, i.e.
GET https://www.example.com/proxy/?url=http%3A%2F%2Fnginx.org%2Fen%2Fdocs%2Fhttp%2Fngx_http_proxy_module.html
Then in Nginx I do
location /proxy/ {
    resolver 8.8.8.8;
    set_unescape_uri $dst $arg_url;
    proxy_pass $dst;
}
This works for single files, but the target server sometimes returns a Location header, which I want to intercept and modify for the client to retry.
Basically I would like to escape $sent_http_location, append it to https://www.example.com/proxy/?url= and pass that back to the browser to retry.
I've tried doing
set_escape_uri $tmp $sent_http_location;
proxy_redirect $sent_http_header /pass/?v=$tmp;
but this doesn't work. I've also tried saving the Location header, then ignoring the incoming header with
proxy_hide_header
and replacing it with my own
proxy_set_header
but hiding the header causes me to lose the variable that held its value.
How can I configure Nginx to handle these redirects so that an encoded URL is returned to the user when the proxied site redirects?
There are several problems with your unsuccessful approach:
proxy_set_header sets the header that goes to the upstream server, not to the client. So even if $sent_http_location hadn't been empty, your configuration couldn't possibly work as you wanted it to.
$sent_http_<header> variables point to exactly the same area of memory as the response headers that will be sent to the client. So when proxy_hide_header takes effect, the specified header is removed from memory along with the value of the corresponding $sent_http_<header>.
set_escape_uri works at a very early stage of request processing, well before proxy_pass is called and the Location header is returned from the upstream server. So it will always process the $sent_http_location variable while it is still empty, and the result will also always be an empty variable.
The last problem is the most serious. The only way to make set_escape_uri work after proxy_pass is to force Nginx to leave the current location and start the processing all over again. This can be done with the error_page trick:
location /proxy/ {
    resolver 8.8.8.8;
    set_unescape_uri $dst $arg_url;
    proxy_pass $dst;
    proxy_intercept_errors on;
    error_page 301 = @rewrite_301;
}

location @rewrite_301 {
    set_escape_uri $location $upstream_http_location;
    return 301 /pass/?v=$location;
}
Note the use of $upstream_http_location instead of $sent_http_location. When Nginx leaves the context of the location, it assumes that the request will be proxied to another upstream, or processed in some other way, and so it clears the headers received from the last proxy_pass to make room for new response headers.
Unlike $sent_http_<header> variables, which represent response headers that will be sent to the client, $upstream_http_<header> variables represent response headers that were received from the upstream. Because of that, they are only replaced with new values when the request is proxied to another upstream server. So, once set, these variables can be used at any moment; they will not be cleared.
