I have a stylesheet gzip-compressed on disk and would like to serve it via nginx. It's named file.css.xgz, and it should be served with:
Content-Type: text/css
Content-Encoding: gzip
So I added this to mime.types:
text/css css css.xgz;
And this to my server configuration:
location ~* \.xgz$ {
    add_header Content-Encoding gzip;
}
The server has definitely been restarted, but the Content-Type is still application/octet-stream (Content-Encoding is set as expected).
Try just "xgz" in the mime.types entry in place of "css.xgz". nginx treats only the part after the last dot as the extension, so a two-dot suffix like css.xgz never matches; see the sketch below.
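A minimal sketch of the corrected mime.types entry (note this types every .xgz file on the server as CSS, which is fine if .xgz is only used for stylesheets):
text/css    css xgz;
With that in place, the location block above keeps adding Content-Encoding: gzip, and the type lookup supplies Content-Type: text/css.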
I have a question about serving gzipped static files from nginx. I did gzip -k style.min.css to produce style.min.css.gz and I uploaded it to the server in the static-root directory. My location block is an exact match and looks like this:
location = /style.min.css {
    root /home/ubuntu/.../static-root/;
    gzip_static on;
    expires 100d;
    add_header Cache-Control "public";
    access_log off;
}
Will nginx just serve up the style.min.css.gz in place of the style.min.css automagically, or do I have to tweak that location block so that the gzipped version is served?
Given the exact match in that location block, this test 404s:
$ curl -I https://example.com/style.min.css.gz -H "Accept-Encoding: gzip"
HTTP/1.1 404 Not Found
Is the compressed version still getting served up or do I need to tweak the location block to something like this so that the .gz file gets served?
location ~ /(style.min.css.*) {...
Update: I can confirm that a gzipped version of a static CSS file is returned, so it does happen automatically. I don't know whether the gzip on; in the server section or the gzip_static on; takes care of it, but it is working.
$ curl -H "Accept-Encoding: gzip" -I https://example.com/bsmin.css
HTTP/1.1 200 OK
Cache-Control: max-age=7673705, public
Cache-control: no-cache="set-cookie"
Content-Encoding: gzip
From the official documentation (https://docs.nginx.com/nginx/admin-guide/web-server/compression/), under "Sending Compressed Files":
To service a request for /path/to/file, NGINX tries to find and send the file /path/to/file.gz. If the file doesn't exist, or the client does not support gzip, NGINX sends the uncompressed version of the file.
Note that the gzip_static directive does not enable on-the-fly compression. It merely uses a file compressed beforehand by any compression tool. To compress content (and not only static content) at runtime, use the gzip directive.
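To settle the asker's remaining doubt: it is gzip_static on; that serves the pre-made .gz file; gzip on; only compresses responses at runtime. The two directives are independent and can coexist, roughly:
gzip on;            # compress responses on the fly
gzip_static on;     # prefer an existing style.min.css.gz over recompressing style.min.css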
I have created a new stack where an Nginx server acts as a reverse proxy between a CDN server and the browser, and the Nginx server is supposed to resolve SSIs in all the HTML files coming from the CDN.
The issue is that Nginx resolves SSIs only in files served with Content-Type: text/html, not application/octet-stream (even though all of them are actual .html files despite the content type; it is a glitch in our company's CDN).
location /path/ {
    ssi on;
    add_header Content-Type text/html;
    proxy_pass https://example-cdn.com/path/;
}
Is there a way to force Nginx to resolve any file as long as the extension is .html, despite the Content-Type header in the CDN response?
I created a secondary server in the same config, rewriting the Content-Type based on the extension and letting the primary nginx server consume the content from there.
Secondary server in the same config
# Mapping
map $uri $custom_content_type {
    default "application/octet-stream";
    ~\.json$ "application/json";
    ~\.html$ "text/html";
    ~\.pdf$ "application/pdf";
}
server {
    listen 8090;
    server_name localhost;

    location /path/ {
        proxy_hide_header Content-Type;
        add_header Content-Type $custom_content_type;
        proxy_pass https://example-cdn.com/path/;
    }
}
Primary server
location /path/ {
    proxy_pass http://localhost:8090/path/;
}
Still open to a more efficient solution; one candidate is sketched below.
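A possibly more efficient route, untested here: the stock http_ssi_module has an ssi_types directive that enables SSI processing for MIME types besides the default text/html, which might remove the need for the second server entirely:
location /path/ {
    ssi on;
    ssi_types application/octet-stream;   # or: ssi_types *;
    proxy_pass https://example-cdn.com/path/;
}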
I'm trying to follow this guide: https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-ubuntu-16-04
but every time I execute curl -I http://myjsfile.com/thejsfile.js it doesn't return the caching headers,
i.e. these:
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
This is what I have in my sites-available file. There are actually two files in there, the default one and our custom one for the Certbot SSL certs, and I applied this to both.
# Expires map
map $sent_http_content_type $expires {
    default off;
    text/html epoch;
    text/css max;
    application/javascript max;
    ~image/ epoch;
}
So I'm not sure it's caching anything, and when I checked with GTmetrix it still gets an F for browser caching.
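One thing worth checking, assuming the guide was followed as written: a map block only defines the $expires variable, and nothing happens until something references it. The guide pairs the map with an expires directive inside the server block:
server {
    # ... existing listen / server_name / location directives ...
    expires $expires;   # without this line the map above has no effect
}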
I also tried this one: NGINX cache static files
and I have this in my nginx.conf, inside the http block:
server {
    location ~* \.(?:ico|css|js)$ {
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
}
but it still didn't work when I checked with the curl command.
So can someone enlighten me on what I'm doing wrong here, or is this not the best approach to caching JS and CSS files?
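A hedged guess about that second attempt: a bare server block added to nginx.conf is a separate virtual server, so if the site is actually served by the server defined in sites-available, the new block never sees the requests. The location has to live inside the server block that handles them, roughly:
server {
    server_name example.org;   # hypothetical; use the existing site definition
    # ...
    location ~* \.(?:ico|css|js)$ {
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
}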
I have the following problem:
https://my.domain.com/js/some_js_file.js
is loading fine, but:
https://my.domain.com/some_other_folder/some_js_file.js
brings up the following error:
Refused to execute script from 'https://my.domain.com/some_other_folder/some_js_file.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
This is now happening after a security update of our loadbalancer.
I tried several things.
Adding:
location ~ \.js {
    add_header Content-Type application/x-javascript;
}
Or:
location ~ \.js {
    add_header Content-Type application/javascript;
}
I also changed /etc/nginx/mime.types
From:
application/x-javascript js;
To:
application/javascript js;
Nothing works, and I don't understand why not all files throw this error: only the files in /some_other_folder/, or maybe everything outside of /js/.
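Two hedged observations. First, add_header cannot change the MIME type: nginx derives Content-Type from the types map, and add_header Content-Type only appends a second header next to it. A per-location override would look like this instead:
location ~ \.js$ {
    types { }                              # disable the extension lookup in this location
    default_type application/javascript;   # every response here gets this type
}
Second, a text/html MIME error on a .js URL usually means the request never reaches the file and an HTML page (an error page or an index fallback) is returned instead; in that case no MIME configuration will help, and the load balancer update is the place to look.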
I have found an interesting problem.
I am trying to serve some gzipped files without the sources using NGINX's gzip_static module (I know the downsides to this). This means you can have gzipped files on the server that will be served with Content-Encoding: gzip. For example, if there's a file /foo.html.gz, a request for /foo.html will be served the compressed file with Content-Type: text/html and Content-Encoding: gzip.
While this usually works it turns out that when looking for index files in a directory the gzipped versions are not considered.
GET /index.html
200
GET /
403
I was wondering if anyone knows how to fix this. I tried setting index.html.gz as an index file, but it is served as a gzip file rather than a gzip-encoded HTML file.
This clearly won't work this way.
This is a part of the module source:
if (r->uri.data[r->uri.len - 1] == '/') {
return NGX_DECLINED;
}
So if the URI ends in a slash, it does not even look for the gzipped version.
But you could probably hack around it using rewrite (this is a guess, I have not tested it):
rewrite ^(.*)/$ $1/index.html;
Since the replacement does not start with a scheme, this is an internal rewrite: request processing restarts with a URI that no longer ends in a slash, so gzip_static can then find index.html.gz.
Edit: to make it work with autoindex (also a guess) you can try this instead of the rewrite:
location ~ /$ {
    try_files ${uri}/index.html $uri;
}
It is probably better overall than using rewrite, but you need to try.
You can prepare your precompressed files and then serve them.
Below, the file is prepared by PHP and served without checking whether the client supports gzip.
// PHP: prepare the precompressed gzip file
// ($s is the string containing the file to pre-compress)
file_put_contents('/var/www/static/gzip/script-name.js.gz', gzencode($s, 9));
# NginX: serve the precompressed gzip file
location ~ "^/precompressed/(.+)\.js$" {
    root /var/www;
    expires 262144;                        # seconds
    add_header Content-Encoding gzip;
    default_type application/javascript;
    try_files /static/gzip/$1.js.gz =404;
}
# Browser requests the file: transfer 113.90 KB (uncompressed size 358.68 KB)
GET http://inc.ovh/precompressed/script-name.js
# Response from the server
Accept-Ranges: bytes
Cache-Control: max-age=262144
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 113540
Content-Type: application/javascript; charset=utf-8
ETag: "63f00fd5-1bb84"
Server: NginX
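For completeness, a hedged alternative that uses only stock modules: gzip_static always; serves a pre-made .gz unconditionally, and the gunzip module decompresses it on the fly for the rare client that does not send Accept-Encoding: gzip. This assumes the .gz files sit next to the requested paths (e.g. /var/www/precompressed/script-name.js.gz) rather than in a separate /static/gzip/ tree:
location /precompressed/ {
    root /var/www;
    gzip_static always;   # serve the .gz even if the client did not ask for gzip
    gunzip on;            # decompress for clients without gzip support
}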