I've uploaded a file to an S3 bucket and added CDN caching for serving the files. I want to update the file under the same name, but the problem is that I can't get the updated file: the CDN always returns the old file because it's cached in the CDN for 1 year (max-age: 31536000).
So, how can I remove it from the CDN?
You can invalidate the file as described at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html. Invalidation instructs CloudFront to ignore the copy in its caches and instead contact the origin server.
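If you need to do this on every upload, the invalidation can also be created programmatically. Here is a minimal sketch using the AWS SDK for JavaScript v3; the distribution ID and file path are placeholders, not values from your setup:

// invalidate.ts — sketch: invalidate one object in a CloudFront distribution
import { CloudFrontClient, CreateInvalidationCommand } from "@aws-sdk/client-cloudfront";

const client = new CloudFrontClient({ region: "us-east-1" });

async function invalidate(distributionId: string, paths: string[]): Promise<void> {
  const command = new CreateInvalidationCommand({
    DistributionId: distributionId,
    InvalidationBatch: {
      CallerReference: Date.now().toString(), // must be unique per request
      Paths: { Quantity: paths.length, Items: paths },
    },
  });
  const result = await client.send(command);
  console.log(result.Invalidation?.Status); // "InProgress" until it propagates
}

invalidate("EDFDVBD6EXAMPLE", ["/path/to/your-file.png"]);

Keep in mind that the first 1,000 invalidation paths per month are free; beyond that, AWS charges per path.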
Related
I already long-term cache my static assets: CSS, images, JS files, etc. Since those files all get content-hash IDs in my build process, this is how I treat them cache-wise:
// STATIC FILES LIKE IMAGES, FONTS, CSS AND JS
Cache-Control: "public,max-age=31536000"
This way I get both client and CDN caching for up to a year, which is great. It's working fine.
But my web app is a single-page React app, so whenever I update it, the index.html file my users already got becomes stale immediately, because it points to the old static JS files, which have all been replaced.
So basically I cannot let them get a stale index.html no matter what.
I'd also like to get the benefits of the CDN cache for that file, and this is where it might get tricky.
Right now, to be on the safe side, here is what I'm doing:
// For index.html
Cache-Control: "no-cache, no-store, must-revalidate"
I was thinking of changing it to:
// FOR index.html
Cache-Control: "max-age=0, s-maxage=86400, must-revalidate"
This way I would get a CDN cache of 1 day, which would be nice. But I still don't wanna take the risk of serving a stale index.html.
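Assuming the Cloud Run service is a Node/Express server (just one possible setup, not necessarily yours), the split between the two policies could look like this sketch:

// server.ts — sketch: different Cache-Control for hashed assets vs index.html
import express from "express";
import path from "path";

const app = express();

// Content-hashed static assets: cacheable for a year by browsers and the CDN.
app.use(
  "/static",
  express.static(path.join(__dirname, "static"), {
    immutable: true,
    maxAge: "1y", // sent as: Cache-Control: public, max-age=31536000, immutable
  })
);

// index.html: browsers must revalidate every time; the CDN may keep it a day.
app.get("*", (_req, res) => {
  res.set("Cache-Control", "max-age=0, s-maxage=86400, must-revalidate");
  res.sendFile(path.join(__dirname, "static", "index.html"));
});

app.listen(Number(process.env.PORT) || 8080);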
Here is what Firebase Hosting says about it:
Any requested static content is automatically cached on the CDN. If you redeploy your site's content, Firebase Hosting automatically clears all your cached static content across the CDN until the next request.
But the problem is my server is hosted on Cloud Run. And Firebase Hosting basically rewrites every request to it. Something like:
firebase.json
"rewrites": [
  {
    "source": "**",
    "run": {
      "serviceId": "server",
      "region": "us-central1"
    }
  }
]
So, whenever I update my app, I re-deploy it to Cloud Run, but I do not run a new firebase deploy --only hosting command, because nothing in my firebase.json file changes between new deployments of the Cloud Run code.
QUESTION
Is it safe to add the s-maxage=86400 header in this situation?
Assuming a new deploy on Cloud Run will not trigger the purge of the CDN cache. Is there something I can do to trigger that? Like some firebase deploy --only hosting:clear-cdn command?
Because even if I run firebase deploy --only hosting again, I'm not really sure the cached files will be purged, because my Firebase Hosting /public folder is always an empty folder. So Firebase Hosting might "feel" that nothing has changed.
After a day of testing, here are the results:
If you set Cache-Control headers that allow shared (CDN) caching, like public or no-cache, your responses will be cached both in the client browser and on the CDN.
When you re-deploy to Cloud Run, will it clear your CDN cache automatically?
No. When you update and re-deploy your app files to Cloud Run, the files cached on the CDN become stale and are not automatically cleared. So even if nothing has changed as far as Firebase Hosting is concerned, you need to run firebase deploy --only hosting again. This makes the CDN drop all of your cached files, and new requests start getting fresh data right away.
I'm not really sure the cached files will be purged, because my Firebase Hosting /public folder is always an empty folder. So Firebase Hosting might "feel" that nothing has changed.
Even if nothing in your Firebase Hosting public folder has changed (in my case it's an empty folder) and nothing in your firebase.json has changed, it will still create a new Firebase Hosting release and it will clear your cached files from the CDN, like the doc says:
Any requested static content is automatically cached on the CDN. If you redeploy your site's content, Firebase Hosting automatically clears all your cached static content across the CDN until the next request.
A caveat for dynamic content
If you have dynamic content that you edit through an admin UI, for example, be aware that the CDN will keep a stale copy of that content until it expires.
For example: the CDN caches /blog/some-post with an s-maxage of 1 day. Even if you change the content of the post dynamically, the CDN will keep serving the cached version for a full day, until it expires and the page is requested again.
I have a site with a large number of images, and I'd like to host these on a remote cloud storage solution, as we're getting close to our storage limit on the current server.
I can get a remote cloud storage service set up. What needs to be done in the WordPress configuration to use it as the new location for uploads?
Thanks
Specifically for AWS S3:
The company I work for uses this: https://deliciousbrains.com/wp-offload-s3/ and it's worked a treat!
It should handle the automatic upload of your old media, plus updating your posts/pages. To be safe, download a local copy of your WP site and database, and run it all locally against a test bucket. Or have a backup to hand in case the upload doesn't work. Can't be too safe!
We've only had one issue with it this past week: if you upload a file but change the file extension afterwards, it never off-loaded that particular image to S3 and continued to load the old /wp-content/uploads/<year>/<month> version.
We are running into an issue where our clients are served stale .js and .css files after code is deployed. We are using IIS as our web server and our code is in ASP.NET 4.5. I did some research and figured out that an ETag in conjunction with Cache-Control should work. As I understand it, the ETag is automatically generated by the web server based on the datetime stamp of the file, so I ran the following steps to see why the system is not sending the latest version of the .js and .css files:
1. Navigated to a page on my website, let's call it demo.aspx. (Assume demo.aspx references a.js, b.js and c.css.)
2. Did a hard refresh (Ctrl + F5) and verified that the browser requested a.js, b.js and c.css and the web server delivered them.
3. Clicked on some other page.
4. Went to the web server and manually updated the files (a.js, b.js and c.css) so their datetime stamps changed.
5. Navigated to demo.aspx again.
This time I saw a request only for demo.aspx, but not for any of the resource files (a.js, b.js and c.css).
I am at loss as to why .js files are not requested when I access my demo.aspx page.
Also, is there an easy way to force client browsers to download the latest version of the .js and .css files every time I deploy code? Based on my research, one option would be to rename the .js and .css files, but please note that this solution won't work for us.
We do use UpdatePanel in our projects. Not sure if that has anything to do with the browser not requesting the .js files the second time.
A widely used trick is to add a query string parameter that is incremented with every new version of the css or js file.
Like myScript.js?version=12. When the number changes, the browser sees it as a new file and downloads it rather than retrieving it from cache.
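A small sketch of the idea (the helper and BUILD_VERSION constant are hypothetical; the version would be bumped, or injected by your build, on every deploy):

// versioned.ts — hypothetical cache-busting helper
const BUILD_VERSION = "12"; // bump on every deploy

function versioned(url: string): string {
  const separator = url.includes("?") ? "&" : "?";
  return `${url}${separator}version=${BUILD_VERSION}`;
}

console.log(versioned("/scripts/myScript.js")); // "/scripts/myScript.js?version=12"

In an ASP.NET page you would apply the same idea wherever the script and stylesheet tags are rendered.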
Just changing the timestamp by editing the file won't work: the browser does not get the file's timestamp from the server. You can try this by saving an image or file from the website; they all have the timestamp of when they were downloaded.
My website is loading the same files both with www and without www. I just deleted a file from the file manager and it now returns a 404, which is fine, but a different version of it is still loading:
https://ospreyhomes.ae/wp-content/uploads/2015/01/logo.png (deleted, but still loading on the domain without www)
https://www.ospreyhomes.ae/wp-content/uploads/2015/01/logo.png (Deleted - 404)
How can I get rid of https://ospreyhomes.ae/wp-content/uploads/2015/01/logo.png being accessible on the internet? I have already deleted the file.
The same thing happens with these two files: the file is deleted but can still be accessed via the URL without www.
https://ospreyhomes.ae/wp-content/uploads/2015/01/logo.png
https://www.ospreyhomes.ae/wp-content/uploads/2015/01/logo.png
This happens to almost any file when you have a CDN caching service like Cloudflare enabled. Make sure you clear your browser cache; try Ctrl + F5. If that doesn't work, try a proxy browser to see the real-time status of the file, or disable Cloudflare while editing files.
From my side, I can see that your file has been deleted.
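If the site really is behind Cloudflare, the stale copy can also be purged explicitly through Cloudflare's purge-by-URL API rather than waiting for it to expire. A sketch (the zone ID and API token are placeholders for your own credentials):

// purge.ts — sketch: purge one URL from the Cloudflare cache
const ZONE_ID = "your-zone-id"; // placeholder
const API_TOKEN = "your-api-token"; // placeholder

async function purgeUrl(url: string): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ files: [url] }), // purge just this URL
    }
  );
  console.log(await res.json()); // { "success": true, ... } when it worked
}

purgeUrl("https://ospreyhomes.ae/wp-content/uploads/2015/01/logo.png");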
How can I invalidate all of the accumulated cache by opening a special URL on the cache server?
With Nginx Plus I could use the proxy_cache_purge directive for this. Can I do it without proxy_cache_purge? To invalidate a specific cached file I tried the https://github.com/FRiCKLE/ngx_cache_purge module, but I cannot reset the whole cache with it.
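For reference, this is roughly how the per-URL purge is wired up with that module (zone name and paths are simplified examples):

# nginx.conf — sketch of ngx_cache_purge usage; purges one entry per request
proxy_cache_path /data/nginx/cache keys_zone=my_zone:10m;

server {
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache my_zone;
        proxy_cache_key $uri$is_args$args;
    }

    # GET /purge/some/path removes the matching cache entry
    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny all;
        proxy_cache_purge my_zone $1$is_args$args;
    }
}

This works entry by entry, which is exactly why it doesn't help for resetting millions of cached files at once.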
P.S. The cache holds a very large number of files (millions), so just deleting the cache folder is a bad idea.