nginx ETag invalidation - http

I'm caching static resources using the following code in my nginx.conf:
http {
    ...
    gzip on;
    gzip_types *;
    gzip_vary on;
    ...
    server {
        ...
        location /static {
            alias /opt/static_root;
            expires max;
        }
    }
}
This is sufficient to set the following http headers:
$ curl -I example.com/static/css/bootstrap.min.css
Content-Length: 97874
Last-Modified: Mon, 21 Nov 2016 18:30:33 GMT
ETag: "58333d49-17e52"
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
However, even though the Last-Modified date is later than that of the cached version the browser has, I'm still seeing the old version of the file (tested on Firefox 50.0 and Chrome 54.0.2840.98).
How can I invalidate the ETag so that whenever I deploy diffs to my static files, the browser understands to reload them?
I have tried nginx -s reload, to no avail.

ETags are used when a client makes a Conditional Request to revalidate an expired resource. But in your case the resource won't expire until 2037! The browser will continue to serve the resource from its cache until then without ever checking with the server. That's what you told it to do with your Expires header.
Typically, if you're going to use a far-future Expires like that, you have to version the resource by changing its name. Alternatively, you can change the Expires to something shorter, in which case the ETag will be used when the client tries to revalidate.
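For illustration, a minimal sketch of the second option (the lifetime here is just an example, not a recommendation):

location /static {
    alias /opt/static_root;
    # short-lived cache: after an hour the browser revalidates with
    # If-None-Match, and nginx replies 304 Not Modified while the ETag matches
    expires 1h;
}

The first option is normally handled by the build or deploy step, e.g. writing the files out under a fingerprinted name (something like bootstrap.min.v2.css, a hypothetical name) and updating the references on each deploy, so the far-future Expires can stay.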

Related

EmberJs served on nginx - Set Cache-Control header in request

My application is built using EmberJs and is served on Nginx. I am able to set the Cache-Control header in the response using add_header in nginx.conf:
location ~* \.(css|eot|gif|jpe?g|js|png|svg|ttf|woff2?)$ {
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
But this alone doesn't work, as the request header is set to no-cache, so the browser always goes to the backend and never uses its cache.
Response Headers
cache-control: max-age=315360000
cache-control: public, must-revalidate, proxy-revalidate
expires: Thu, 31 Dec 2037 23:55:55 GMT
pragma: public
Request Headers
Cache-Control: no-cache
Pragma: no-cache
I understand that the Cache-Control header in the request is causing the problem, but I am not able to find out how to set this header to some other value. Does the Ember build or the nginx config support this?
This is the index.html generated by Ember, which has links to the vendor JS and CSS that are supposed to be reused from the browser's cache.
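As a side note (a sketch only, not a fix for the request-header issue), the two cache-control lines shown in the response headers above come from expires max and the add_header each emitting a Cache-Control header; they could be consolidated into a single header:

location ~* \.(css|eot|gif|jpe?g|js|png|svg|ttf|woff2?)$ {
    # one Cache-Control header carrying both the lifetime and the directives,
    # instead of expires max plus a second add_header
    add_header Cache-Control "public, max-age=315360000, must-revalidate, proxy-revalidate";
    add_header Pragma public;
}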

How to cache Amazon S3 objects for some time using Nginx, Ubuntu 20.04?

I have an Nginx web server running my website, and I enabled browser caching for all static assets for a limited time to reduce resource usage and improve performance:
curl -I https://www.betafox.net/wp-content/uploads/2022/02/Dungeon-Reset-193x278-1-175x238.webp
HTTP/2 200
server: nginx/1.18.0 (Ubuntu)
date: Mon, 21 Mar 2022 17:16:31 GMT
content-type: image/webp
content-length: 13368
last-modified: Wed, 23 Feb 2022 10:14:46 GMT
etag: "62160916-3438"
expires: Thu, 31 Dec 2037 23:55:55 GMT
cache-control: max-age=315360000
accept-ranges: bytes
While some assets don't have a long lifetime, they still significantly reduce the server's workload. Here's my simple server caching config:
# Expires map
map $sent_http_content_type $expires {
    default                off;
    text/html              epoch;
    text/css               max;
    application/javascript max;
    ~image/                max;
    ~font/                 max;
}
...
server {
    ...
    expires $expires;
    ...
}
...
However, caching only works for local files, i.e. files that are stored on my server's disk. I want the S3 objects (images) that are accessed remotely through my website, for example the ones on https://www.betafox.net/webtoon/dungeon-reset/chapter-1/, to also be cached like the other assets, to reduce GET requests and their cost. I know they won't be cached for access from outside my site, but I think it would partially solve the issue without having to rely on complex solutions...
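For what it's worth, one possible direction (purely a sketch; the bucket name, location path, cache path and zone name are assumptions) is to proxy the S3 objects through nginx, so the same expires map applies to them and nginx keeps a local copy between S3 GETs:

# in the http block: a cache zone for the proxied S3 objects
proxy_cache_path /var/cache/nginx/s3 keys_zone=s3_cache:10m max_size=1g inactive=7d;

server {
    ...
    # serve the remote images through nginx instead of linking to S3 directly
    location /s3-images/ {
        proxy_pass https://example-bucket.s3.amazonaws.com/;
        proxy_cache s3_cache;
        proxy_cache_valid 200 7d;   # keep a successful response for a week
        expires $expires;           # reuse the existing expires map for the browser
    }
    ...
}

This only helps if the pages link to the /s3-images/ path on your own domain rather than to the S3 URLs themselves.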

nginx cache not working for CSS and JS scripts when checking using curl command

I'm trying to follow this guide: https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-ubuntu-16-04
but every time I execute curl -I http://myjsfile.com/thejsfile.js it doesn't return the cache headers, i.e. these:
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
This is what I have in my sites-available file. There are actually two files in there, the default one and our custom one for the Certbot SSL certs, and I applied this to both of them.
# Expires map
map $sent_http_content_type $expires {
    default                off;
    text/html              epoch;
    text/css               max;
    application/javascript max;
    ~image/                epoch;
}
So I'm not sure it's caching anything, and when I checked it using GTmetrix it still gets an F for browser caching.
I also tried this one: NGINX cache static files
and I have this in my nginx.conf inside the http block:
server {
    location ~* \.(?:ico|css|js)$ {
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
}
but it still didn't work when I checked using the curl command.
So can someone enlighten me on what I'm doing wrong here, or is this not the best approach to caching JS and CSS files?
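For reference, the DigitalOcean guide's recipe has two parts, and the snippet above only shows the map. If I recall the guide correctly, the server block also has to use the mapped value, roughly like this (server_name is a placeholder):

# the map in the http context only defines $expires; without an expires
# directive in the server block, no Expires/Cache-Control headers are added
server {
    listen 80;
    server_name example.com;   # placeholder
    ...
    expires $expires;
    ...
}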

Nginx $upstream_addr variable doesn't work in if condition

I'm running a reverse proxy using the proxy_pass directive from ngx_http_proxy_module. I want to forbid access to certain backend IP address ranges (like 172.0.0.0/24). I've tried
if ($upstream_addr ~* "^172.*") {
    return 403;
}
add_header X-mine "$upstream_addr";
both in server and location context but it doesn't work, i.e. Nginx still returns 200:
$ curl localhost -I
HTTP/1.1 200 OK
Server: nginx/1.17.0
Date: Thu, 13 Feb 2020 12:58:36 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Last-Modified: Tue, 24 Sep 2019 14:49:10 GMT
ETag: "5d8a2ce6-264"
Accept-Ranges: bytes
X-mine: 172.20.0.2:80
What am I missing? (Note that I added the content of $upstream_addr variable to X-mine header for debugging.)
My understanding is that the if directive is run before the upstream request is sent, while the $upstream_addr variable is only set after the upstream request has completed. I have tried and failed to find definitive documentation that explains the precise process, but the nginx documentation seems to be missing a number of things that one might wish for.
See this answer, and also If is evil, for a little more guidance. I'm not actually sure quite what you're trying to achieve, so I can't say whether or not it's possible.
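As a sanity check (a sketch; the log name and format are arbitrary), $upstream_addr does work in contexts that are evaluated after the response, such as access logging, which is consistent with the timing described above:

# in the http block: $upstream_addr is empty during the rewrite phase (when "if"
# runs), but it is populated by the time the access log line is written
log_format upstream_debug '$remote_addr -> $upstream_addr "$request" $status';
access_log /var/log/nginx/upstream_debug.log upstream_debug;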

Unable to serve static file using CloudFront with the origin server as an EC2 instance

I have a web application running on a CentOS EC2 instance behind an Nginx reverse proxy with an SSL certificate (Let's Encrypt).
I have a JavaScript file located at a URL such as https://example.com/static/src/js/allEnd.js
I used CloudFront to deliver the static file with the HTTP EC2 instance as the origin server (not using an S3 bucket).
My origin server is mapped to the domain name https://example.com. This is the configuration I have made so far:
1. www.example.com is redirected to example.com in Nginx
2. The CloudFront URL is aliased to my custom domain, i.e. cdn.example.com
3. SSL for example.com is done on Nginx, whereas SSL for cdn.example.com is done on AWS.
What I have understood so far is that the first time, CloudFront will serve the static content by fetching the file from my EC2 server, and after that it will serve it from its own cache. But in my case, CloudFront redirects to the origin server to get the static file every time, so CloudFront is not serving it.
Here are the headers for both the origin server and the CloudFront server.
1. Origin server (https://example.com)
get https://example.com/static/src/js/allEnd.js
HTTP/2 200
server: nginx/1.12.2
date: Sun, 12 May 2019 12:27:50 GMT
content-type: application/javascript
content-length: 168435
etag: "wzsdm-1557567525-168435-283837276"
cache-control: max-age=604800, public
expires: Sun, 19 May 2019 12:27:50 GMT
strict-transport-security: max-age=15768000; includeSubdomains; preload
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=15768000
2. CloudFront with origin as https://example.com (https://cdn.example.com)
get https://cdn.example.com/static/src/js/allEnd.js
HTTP/2 301
content-type: text/html
content-length: 185
location: https://example.com/static/src/js/allEnd.js
server: nginx/1.12.2
date: Sun, 12 May 2019 09:17:40 GMT
strict-transport-security: max-age=15768000; includeSubdomains; preload
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=15768000
age: 17
x-cache: Hit from cloudfront
via: 1.1 76d9b50884e58e2463b175e34e790838.cloudfront.net (CloudFront)
x-amz-cf-id: HBbfJXbFgKQz4eYlQSpLPQAk8pxZRKuGsb6YjFu8-AJL0JwPfs8FZw==
As you can see from the response headers, cdn.example.com (CloudFront) redirects to the origin (example.com).
Also, I am confused by the content-type: text/html, which I expected to be content-type: application/javascript.
What are the possibilities that I may have misconfigured?
If you want to know anything more, please feel free to ask. Thanks.
P.S.: I am new to Nginx and AWS configuration, and most importantly cache control.
You will need to examine the origin server configuration and log files to understand why this happened.
Check the origin server logs and you will find that the origin server originally generated that 301 redirect response -- not CloudFront.
Notice that the response headers from CloudFront include server: nginx/1.12.2. CloudFront did not add this.
The Content-Type is set as it is because your origin server also returned an HTML body saying something like "object has moved" or "you are being redirected." Modern browsers typically don't display that message; they just follow the redirect.
In any event, if the origin server returns a redirect, CloudFront does not follow the redirect returned by the origin server -- it simply returns the redirect to the browser.
One possible explanation for the behavior you see is that you set the Origin Protocol Policy in CloudFront to "HTTP Only" instead of "HTTPS Only" or "Match Viewer," so the origin is trying to redirect the connection to use HTTPS since it sees the incoming connection (from CloudFront) as HTTP instead of HTTPS.
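As an illustration of that first possibility (a sketch only, not your actual config), a typical origin-side block like this returns exactly such a 301 whenever CloudFront connects over plain HTTP:

# if CloudFront's Origin Protocol Policy is "HTTP Only", requests arrive here on
# port 80 and get bounced to HTTPS; CloudFront then caches and returns that 301
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}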
Another possibility is that you configured CloudFront's Cache Behavior settings to whitelist the Host header for forwarding to the origin.
