Lighthouse complains about gzip files - nginx

We are optimizing our new website and there is one problem we can't get rid of. We have an optimized caching policy that works well. The problem: Lighthouse complains about the TTL for the gzipped files. If I add gzip or gz to the line defining the cache, our site looks strange and only partially loads.
This is our caching.conf:
location ~* ^.+\.(ico|css|js|gif|jpe?g|png|woff|woff2|eot|svg|ttf|webp|ogg|mp4|mov|mp3|wmv|webm)$ {
expires 180d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate";
}
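The change that breaks the page is, roughly, adding the compressed extensions to the same pattern (a sketch of what I tried, not the exact line):
location ~* ^.+\.(ico|css|js|gif|jpe?g|png|woff|woff2|eot|svg|ttf|webp|ogg|mp4|mov|mp3|wmv|webm|gz|gzip)$ {
expires 180d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate";
}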
Output from Lighthouse:
Serve static assets with an efficient cache policy (5 resources found)
URL: …compressed/merged-e98e980…-482487f….js.165….gzip (xxx.yyyyy.info)
Cache TTL: None
Transfer Size: 54 KiB
So, how can I set the TTL for gzip or gz?
Yours
Stefan

Related

How do I get CORS to work for images loaded from Cloudfront and server running NGINX?

I'm trying to figure out how to properly get CORS to work on all the images for our site - so that we can cache them using Workbox for the PWA that I'm building.
Our current setup is as follows -
I have my main site running on https://www.MyAwesomeSite.com and I've configured AWS CloudFront to serve all our static assets (JS, CSS and images) through the URL https://data.MyAwesomeSite.com/.
My PWA is almost ready - except that the opaque responses (all images from CloudFront) are blowing up the cache size, as expected: while the actual cache size is ~200 KB, Chrome reports it to be around 300 - 400 MB.
While investigating the issue, I found out that Workbox may sometimes cache opaque responses, which causes the cache-size issue.
After reading multiple posts and articles about CORS, I'm still not sure if I need to enable CORS on NGINX running on my server OR configure it on CloudFront.
My First Approach:
I tried enabling CORS on my NGINX server by following the guide at https://enable-cors.org/server_nginx.html. However, using the code as-is resulted in the entire site showing a 404 error.
While investigating this new issue, I found out that if blocks inside location are unreliable and not recommended. I tried using a map instead, but it did not work. My final approach to enabling CORS on my NGINX is this -
server {
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 http2 ssl; # managed by Certbot
root /path/to/my/files;
server_name www.MyAwesomeSite.com;
index index.php index.html index.htm index.nginx-debian.html;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
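# note: add_header directives set at server level are inherited by a location only if that
# location defines no add_header of its own, so these CORS headers can silently disappear on
# responses handled by a location that adds its own headers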
client_max_body_size 32M;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
# Rest of the configuration
While I'm not sure whether this approach is correct - and whether it actually is the right way to do it - I do see the new headers on the regular GET requests for views loaded from my server.
My Second Approach
I then tried setting up CORS on my Cloudfront with the help of AWS Tech Support and it seems to be working as expected.
Then in order to tackle my original problem, I followed Google's recommendation to add crossorigin="anonymous" to all the image tags on my site.
But this leads to a new problem!
With crossorigin="anonymous" added to all the images, I found that Chrome would randomly fail to load images, stating that no Access-Control-Allow-Origin header was present on the response.
My main problems are as follows -
None of my images are being loaded via XHR requests. Do I still need CORS, and if I do, should I enable it on my NGINX or on CloudFront?
How do I ensure caching of image assets for my PWA without really blowing up the cache?
Am I missing out on anything important?
I'd really be thankful to anyone who helps me with this issue. I've been trying to figure this out for the last 72 hours without any success.
The origin whose resources you're accessing from a different origin needs to enable cross-origin requests.
That is, your CloudFront config for data.myawesomesite.com needs to have the appropriate Access-Control-Allow-Origin headers, and related headers if necessary, to allow requests from sites loaded from www.myawesomesite.com if you expect to read that data in script.
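For illustration only: if the CloudFront distribution for data.MyAwesomeSite.com uses this same nginx server as its origin (an assumption), the kind of header the image responses would need looks roughly like this sketch, with CloudFront also configured to forward and cache on the Origin header:
location ~* \.(gif|jpe?g|png|webp|svg)$ {
# allow pages on the main site to read these responses cross-origin
add_header Access-Control-Allow-Origin "https://www.MyAwesomeSite.com" always;
add_header Vary Origin always;
}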

Updating a website served with a large max-age

I have a SPA website (VueJS) that I've begun updating on a daily basis. When I was new to the entire process, I borrowed bits and pieces of my nginx configuration from multiple sources and ended up serving all the files on my website with Cache-Control: max-age=31536000.
After having users complain that they're unable to find my recent changes, I'm inclined to think that it may be due to the browser caching everything until 2037 :(. This hypothesis is supported by the fact that following my advice to press CTRL+F5 fixed their issue.
I have since updated the website with different cache rules, but the browser doesn't seem to be hitting my server to fetch these newer rules.
map $sent_http_content_type $expires {
default off;
text/html off;
text/css off;
application/javascript off;
application/x-javascript off;
}
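# note: this map only takes effect where an 'expires $expires;' directive is used
# (presumably somewhere in the parts elided below)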
...
server {
...
location / {
add_header Cache-Control 'no-cache, must-revalidate, proxy-revalidate, max-age=0';
...
}
}
Is there any way to undo this? Do I have to pack up and move to another domain?
If you have had a far-future Cache-Control lifetime set for all pages, and have a solid user base who were visiting your site while that was in effect... then the short answer is: YES.
There is no way to undo the browser caching, because the browser will not check for a new cache policy until the currently cached assets (which, in this case, include your pages) expire.
But you can take some comfort in the fact that people tend to change browsers or run OS optimizers (which clear caches), or you can run an email campaign instructing the users you know to clear their browser caches.
Not a good situation any way you look at it.
The setting that seems to work for me is the following, using map; with it I don't need to set up a no-cache header anywhere else.
server {
add_header X-Frame-Options SAMEORIGIN always;
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
expires $expires;
...
}
map $sent_http_content_type $expires {
default off;
"text/html" epoch;
"text/html; charset=utf-8" epoch;
"text/css" epoch;
"application/javascript" epoch;
"~image/" max;
}
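# for reference, per the nginx docs: 'epoch' sends 'Expires: Thu, 01 Jan 1970 00:00:01 GMT'
# plus 'Cache-Control: no-cache'; 'max' sends 'Expires: Thu, 31 Dec 2037 23:55:55 GMT'
# plus 'Cache-Control: max-age=315360000'; 'off' adds neither header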
This guide helped me understand it:
https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-ubuntu-16-04

nginx Cross-Origin Resource issues

I've read almost every post on Stack Overflow regarding CORS, and whatever I try does not work. Here is my setup:
Ubuntu (DigitalOcean)
nginx
CDN: CDN77.com (not Amazon)
Cloudflare
WordPress with WP Fastest Cache
Each time a new setting was applied, I purged Cloudflare and restarted nginx.
This is what I've tried:
.htaccess (doesn't work)
<IfModule mod_headers.c>
<FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css|css)$">
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Headers "Cache-Control, Pragma, Origin, Authorization, Content-Type, X-Requested-With"
Header set Access-Control-Allow-Methods "GET, PUT, POST"
</FilesMatch>
</IfModule>
nginx (doesn't work)
add_header Access-Control-Allow-Headers "X-Requested-With";
add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS";
add_header Access-Control-Allow-Origin "*";
I am pulling my hair out trying to figure out why Font Awesome won't show its icons on my site, which is on a different domain.
.htaccess files are Apache-only; they will never work with Nginx.
With nginx it should work if these headers are added to the font HTTP response... but it seems you do not own the fonts and are loading them from another website. The CORS headers need to be set by that website, not yours. Check what these headers are on the fonts, and check that your website is allowed to use the fonts from there (otherwise you will have to download the fonts to your website and add the headers in an nginx location based on the font extension).
I guess you are confusing CORS headers with CSP headers (Content Security Policy), with which you can give a list of allowed resources for your website.
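If you do end up self-hosting the fonts as suggested above, a minimal sketch of such a location block could look like this (the extension list and the wildcard origin are assumptions to adjust):
location ~* \.(eot|otf|ttf|ttc|woff|woff2)$ {
# the fonts now live on this host, so the CORS header can be added here
add_header Access-Control-Allow-Origin "*";
}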

NGINX Serve Precompressed index file without source

I have found an interesting problem.
I am trying to serve some gzipped files without the sources using NGINX's gzip_static module (I know the downsides of this). This means you can have gzipped files on the server that will be served with Content-Encoding: gzip. For example, if there's a file /foo.html.gz, a request for /foo.html will be served the compressed file with Content-Type: text/html and Content-Encoding: gzip.
While this usually works it turns out that when looking for index files in a directory the gzipped versions are not considered.
GET /index.html
200
GET /
403
I was wondering if anyone knows how to fix this. I tried setting index.html.gz as an index file, but it is served as a gzip file rather than as gzip-encoded HTML.
This clearly won't work this way.
This is a part of the module source:
if (r->uri.data[r->uri.len - 1] == '/') {
return NGX_DECLINED;
}
So if the URI ends in a slash, it does not even look for the gzipped version.
But you could probably hack around it using rewrite.
(This is a guess, I have not tested it)
rewrite ^(.*)/$ $1/index.html;
Edit: to make it work with autoindex (also a guess), you can try using this instead of rewrite:
location ~ /$ {
try_files ${uri}/index.html $uri;
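# if $uri/index.html does not exist, fall back to the directory URI itself so autoindex can handle it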
}
It probably is better overall than using rewrite. But you need to try ...
You can prepare your precompressed files and then serve them.
Below, the file is prepared by PHP and served without checking whether the client supports gzip.
// PHP prepares the precompressed gzip file
file_put_contents('/var/www/static/gzip/script-name.js.gz', gzencode($s, 9));
// where $s is the string containing the file to pre-compress
# NGINX serves the precompressed gzip file
location ~ "^/precompressed/(.+)\.js$" {
root /var/www;
expires 262144;
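# a bare number for 'expires' is interpreted as seconds, so 262144 is roughly three days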
add_header Content-Encoding gzip;
default_type application/javascript;
try_files /static/gzip/$1.js.gz =404;
}
# Browser requests the file - transfer 113.90 KB (uncompressed size 358.68 KB)
GET http://inc.ovh/precompressed/script-name.js
# Response from the server
Accept-Ranges bytes
Cache-Control max-age=262144
Connection keep-alive
Content-Encoding gzip
Content-Length 113540
Content-Type application/javascript; charset=utf-8
ETag "63f00fd5-1bb84"
Server NginX

Indefinitely caching an HTTP response via Nginx fails

I'm trying to tell nginx to cache some of my assets (js, css) forever, or at least for a very long time.
The idea is that once an asset bundle is compiled and published with an /assets/ URI prefix (e.g. /assets/foo-{fingerprint}.js) it stays there and doesn't ever need to change.
The internets told me I should write the following rule:
location ~ ^/assets/.*-([^.]+)\.(js|css)$ {
gzip_static on; # there's also a .gz of the asset
expires max;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
break;
}
I would expect this to result in responses with HTTP code 304 Not Modified, but what I get is a consistent HTTP 200 OK every time.
I have tried some other approaches, for instance:
a) explicitly setting modification time to a constant point in time in the past;
add_header Last-Modified "Thu, 01 Jan 1970 00:00:00 GMT";
b) switching to If-None-Match checks;
add_header ETag $1;
if_modified_since off;
However, the only thing that really worked as needed was this:
add_header Last-Modified "Thu, 01 Jan 2030 00:00:00 GMT";
if_modified_since before;
I'm lost. This is contrary to everything I thought was right. Please help.
You should change your internets, since they are giving you bad advice.
Just remove all the add_header lines from your location (as well as the surplus break); blanking Last-Modified and ETag strips the validators the browser would need in order to send a conditional request, so a 304 can never happen:
location ~ ^/assets/.*-([^.]+)\.(js|css)$ {
gzip_static on; # there's also a .gz of the asset
expires max;
}
and read the docs from the true Internet: http://nginx.org/r/expires and https://www.rfc-editor.org/rfc/rfc2616
It turned out to be part of my configuration. During my research I realized that the browser uses heuristics to revalidate requests with conditional-GET headers (ETag, Last-Modified). That makes a lot of sense for back-end responses, where you can use it to save server resources.
But for static files (js, css, images), you can tell the browser to serve them straight away without any conditional-GET validation. This works well if you update the file name whenever the content changes.
This part of configuration makes it happen:
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
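# combined with 'Cache-Control: public' and the far-future Expires from 'expires max' in the
# asset location, blanking Last-Modified and ETag leaves the browser nothing to revalidate with,
# so cached assets are simply reused until they expire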
