According to this doc, we need to stop page-data.json from being cached.
Here's my location block in nginx:
location /page-data {
    add_header Cache-Control "public, max-age=0, must-revalidate";
}
For some reason the response header still says max-age=3660000.
How would I go about making sure that the page-data.json files under /page-data/ are never cached?
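A likely culprit is another location elsewhere in the config (often a regex one matching something like \.json$, or a generic static-assets rule with a long expires) winning the match, since regex locations take priority over plain prefix locations. A sketch, assuming that is what is happening here; the ^~ modifier tells nginx to skip the regex locations once this prefix matches:

location ^~ /page-data/ {
    add_header Cache-Control "public, max-age=0, must-revalidate";
}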
I need to add the X-Frame-Options header below to nginx on Cloud Foundry (CF).
add_header X-Frame-Options "SAMEORIGIN";
I am using Cloud Foundry Staticfile buildpacks.
If I edit nginx.conf directly, the change is removed whenever I redeploy my application, so the recommended approach is to add it through the Staticfile buildpack.
But I don't know the exact value to use for X-Frame-Options.
Reference: https://docs.cloudfoundry.org/buildpacks/staticfile/index.html
Correct, you do not want to edit the config file directly. Your change will disappear the next time your app is restarted, restaged, or recreated, the next time it crashes, or even the next time the platform moves the app due to maintenance.
If you follow the instructions for adding custom Nginx configuration, you can put the configuration into a snippet, and the Nginx configuration generated by the Staticfile buildpack will automatically pick it up.
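For example, a sketch based on my reading of the buildpack docs linked in the question (double-check the exact paths there): add a location_include entry to your Staticfile and ship the snippet under nginx/conf/:

# Staticfile (in the app root)
root: public
location_include: includes/*.conf

# nginx/conf/includes/headers.conf
add_header X-Frame-Options "SAMEORIGIN";

The snippet is included inside the location block the buildpack generates, so location-scoped directives such as add_header are exactly what belongs in it.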
Hope that helps!
Enable X-Frame-Options in the nginx.conf file:
location ~* \.(?:html)$ {
    try_files $uri =404;
    add_header Cache-Control 'no-cache, no-store, must-revalidate, max-age=0';
    # Pick either SAMEORIGIN or DENY; declaring both sends two conflicting headers
    add_header X-Frame-Options 'SAMEORIGIN';
    add_header Content-Security-Policy "frame-ancestors 'self';";
    expires 0;
}
I have a SPA website (VueJS) that I've begun updating on a daily basis. When I was new to the entire process, I borrowed bits and pieces of my nginx configuration from multiple sources and ended up serving all the files on my website with Cache-Control: max-age=31536000.
After users complained that they were unable to find my recent changes, I'm inclined to think the browser is caching everything till 2037 :(. This hypothesis is supported by the fact that my advice of CTRL+F5 fixed their issue.
I have since updated the website with different cache rules, but the browsers don't seem to be hitting my server to fetch them.
map $sent_http_content_type $expires {
    # off: nginx adds no Expires/Cache-Control headers for these types
    default                  off;
    text/html                off;
    text/css                 off;
    application/javascript   off;
    application/x-javascript off;
}
...
server {
    ...
    location / {
        add_header Cache-Control 'no-cache, must-revalidate, proxy-revalidate, max-age=0';
        ...
    }
}
Is there any way to undo this? Do I have to pack up and move to another domain?
If you have had a far-future Cache-Control lifetime set for all pages, and a solid user base who visited your site while it was in effect... then the short answer is: YES.
There is no way to undo it: the browser will not check for a new cache policy until the currently cached assets (your pages too, in this case) expire.
But you can take comfort in the fact that people tend to change browsers and run OS optimizers (which clear caches), and you can run an email campaign instructing the users you know of to clear their browser caches.
Not a good situation any way you look at it.
The following setup using map seems to work for me, and I don't need to set a no-cache header anywhere else.
server {
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    expires $expires;
    ...
}
# The map must live at the http level, outside any server block
map $sent_http_content_type $expires {
    default                    off;
    "text/html"                epoch;   # always revalidate HTML...
    "text/html; charset=utf-8" epoch;   # ...including when a charset is appended
    "text/css"                 epoch;
    "application/javascript"   epoch;
    "~image/"                  max;     # cache images as long as possible
}
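For reference, per the nginx expires docs, those keyword values translate roughly as follows, which is why HTML is always revalidated while images stay cached as long as possible:

# expires epoch -> Expires: Thu, 01 Jan 1970 00:00:01 GMT
#                  Cache-Control: no-cache
# expires max   -> Expires: Thu, 31 Dec 2037 23:55:55 GMT
#                  Cache-Control: max-age=315360000
# expires off   -> Expires/Cache-Control left untouched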
This guide helped me understand it:
https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-ubuntu-16-04
I have found an interesting problem.
I am trying to serve some gzipped files without the sources using NGINX's gzip_static module (I know the downsides to this). This means you can have gzipped files on the server that will be served with Content-Encoding: gzip. For example, if there's a file /foo.html.gz, a request for /foo.html will be served the compressed file with Content-Type: text/html and Content-Encoding: gzip.
While this usually works it turns out that when looking for index files in a directory the gzipped versions are not considered.
GET /index.html  ->  200
GET /            ->  403
I was wondering if anyone knows how to fix this. I tried setting index.html.gz as an index file, but it is served as a gzip file rather than a gzip-encoded HTML file.
This clearly won't work this way.
This is part of the module source:
if (r->uri.data[r->uri.len - 1] == '/') {
    return NGX_DECLINED;
}
So if the URI ends in a slash, it does not even look for the gzipped version.
But you could probably hack around it using rewrite (this is a guess, I have not tested it):
rewrite ^(.*)/$ $1/index.html;
Edit: To make it work with autoindex (also a guess) you can try using this instead of the rewrite:
location ~ /$ {
    try_files ${uri}/index.html $uri;
}
It is probably better overall than using rewrite, but you need to try...
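One caveat worth flagging (my addition, equally untested): try_files checks the filesystem for the uncompressed file, so if only index.html.gz exists on disk, the ${uri}/index.html check fails and you fall through to plain $uri. rewrite never touches the filesystem, which makes it the safer bet when the sources are truly absent. A sketch, assuming the ngx_http_gunzip_module is compiled in:

server {
    # rewrite does not check file existence, so gzip_static can still
    # pick up /some/dir/index.html.gz afterwards
    rewrite ^(.*)/$ $1/index.html;

    location ~ \.html$ {
        gzip_static always; # serve the .gz even without Accept-Encoding: gzip
        gunzip on;          # decompress for clients that cannot accept gzip
    }
}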
You can prepare your precompressed files and then serve them.
Below, the file is prepared by PHP and served without checking whether the client supports gzip.
// PHP: prepare the precompressed gzip file
// ($s is the string containing the content to pre-compress)
file_put_contents('/var/www/static/gzip/script-name.js.gz', gzencode($s, 9));
# NginX: serve the precompressed gzip file
location ~ "^/precompressed/(.+)\.js$" {
    root /var/www;
    expires 262144;                       # in seconds (~3 days)
    add_header Content-Encoding gzip;
    default_type application/javascript;
    try_files /static/gzip/$1.js.gz =404;
}
# Browser requests a file - transferred 113.90 KB (uncompressed size 358.68 KB)
GET http://inc.ovh/precompressed/script-name.js
# Response from the server
Accept-Ranges: bytes
Cache-Control: max-age=262144
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 113540
Content-Type: application/javascript; charset=utf-8
ETag: "63f00fd5-1bb84"
Server: NginX
I'm trying to tell nginx to cache some of my assets (js, css) forever, or at least for a very long time.
The idea is that once an asset bundle is compiled and published with an /assets/ URI prefix (e.g. /assets/foo-{fingerprint}.js) it stays there and doesn't ever need to change.
The internets told me I should write the following rule:
location ~ ^/assets/.*-([^.]+)\.(js|css)$ {
    gzip_static on; # there's also a .gz of the asset
    expires max;
    add_header Cache-Control public;
    add_header Last-Modified "";
    add_header ETag "";
    break;
}
I would expect this to result in responses with HTTP code 304 "Not Modified", but what I get is a consistent HTTP 200 (OK) every time.
I have tried some other approaches, for instance:
a) explicitly setting the modification time to a constant point in the past:
add_header Last-Modified "Thu, 01 Jan 1970 00:00:00 GMT";
b) switching to If-None-Match checks:
add_header ETag $1;
if_modified_since off;
However, the only thing that really worked as needed was this:
add_header Last-Modified "Thu, 01 Jan 2030 00:00:00 GMT";
if_modified_since before;
I'm lost. This is contrary to everything I thought was right. Please help.
You should change your internets, since they give you wrong advice.
Just remove all the add_header lines from your location (as well as the surplus break). The empty Last-Modified and ETag values suppress those headers entirely, so the browser has no validators to send in If-Modified-Since or If-None-Match, and a 304 can never match:
location ~ ^/assets/.*-([^.]+)\.(js|css)$ {
    gzip_static on; # there's also a .gz of the asset
    expires max;
}
and read the docs from the true Internet: http://nginx.org/r/expires and https://www.rfc-editor.org/rfc/rfc2616
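To verify the fix: on a normal visit the browser now serves the asset straight from cache without contacting the server at all, and on a refresh it sends a conditional request that the intact validators can answer with a 304 (headers abridged, values illustrative):

GET /assets/foo-0aa61b95.js HTTP/1.1

HTTP/1.1 200 OK
Last-Modified: Mon, 13 Feb 2023 09:00:00 GMT
Cache-Control: max-age=315360000

GET /assets/foo-0aa61b95.js HTTP/1.1
If-Modified-Since: Mon, 13 Feb 2023 09:00:00 GMT

HTTP/1.1 304 Not Modified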
It turned out to be part of my configuration. During my research I realized that the browser uses heuristics to decide when to revalidate requests with conditional GET headers (ETag, Last-Modified). That makes a lot of sense for back-end responses, where you can use it to save server resources.
But for static files (js, css, images), you can tell the browser to serve them straight away without any conditional GET validation. This works well if you change the file name whenever the content changes.
This part of the configuration makes it happen:
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
I am looking for an nginx config that sets the Access-Control-Allow-Origin header to the value received in the Origin request header.
It seems that the * wildcard doesn't work with Chrome (at least for credentialed requests), and listing multiple URLs doesn't work with Firefox, as that is not allowed by the CORS specification.
So far, the only solution is to set Access-Control-Allow-Origin to the value received in the Origin header (yes, some validation could be implemented).
The question is how to do this in nginx, preferably without installing additional extensions.
set $allow_origin "https://example.com";
# instead I want to get the value from the Origin request header
add_header 'Access-Control-Allow-Origin' $allow_origin;
Using if can sometimes break other config such as try_files; you can end up with unexpected 404s.
Use map instead:
map $http_origin $cors_header {
    # an empty value means add_header sends no header at all
    default "";
    "~^https?://[^/]+\.example\.com(:[0-9]+)?$" "$http_origin";
}
server {
    ...
    location / {
        add_header Access-Control-Allow-Origin $cors_header;
        try_files $uri $uri/ /index.php;
    }
    ...
}
If is evil
I'm starting to use this myself, and this is the line in my current Nginx configuration:
add_header 'Access-Control-Allow-Origin' "$http_origin";
This sets the header to allow the origin of the request as the only allowed origin, so wherever you are coming from is the only place allowed. It shouldn't be much different from allowing "*", but it looks more specific from the browser's perspective.
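One detail worth adding on top of that line (my addition): when the header value varies per request, downstream caches should be told, or one origin's response may be replayed to a different origin:

add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Vary' 'Origin'; # the response differs per Origin; keep caches honest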
Additionally you can use conditional logic in your Nginx config to specify a whitelist of hostnames to allow. Here's an example from https://gist.github.com/Ry4an/6195025
# Caution: the regex below is only end-anchored, so an origin such as
# https://evil-whitelist.address.one would also match; anchor the start too,
# e.g. ~*^https?://(whitelist\.address\.one|whitelist\.address\.two)$
if ($http_origin ~* (whitelist\.address\.one|whitelist\.address\.two)$) {
    add_header Access-Control-Allow-Origin "$http_origin";
}
I plan to try this technique in my own server to whitelist the allowed domains.
Here is part of a file from the conf.d directory, where Nginx virtual hosts are usually described.
$http_origin is compared against the list of allowed origins, and then in the second map block the value written to the Access-Control-Allow-Origin header is decided according to that list.
Here is part of the code.
#cat /etc/nginx/conf.d/somehost.conf
map $http_origin $origin_allowed {
    default 0;
    https://xxxx.yyyy.com 1;
    https://zzz.yyyy.com  1;
}
map $origin_allowed $origin {
    default "";
    1 $http_origin;
}
server {
    server_name somehost.com;
    #...[skipped text]
    add_header Access-Control-Allow-Origin $origin always;
    #...[skipped text]
}
I tested it on my servers. All works fine.
Have a nice day & be healthy,
Eugene.