I am trying to set up nginx to cache my static files, such as jpg, jpeg, css and js. The following is what my nginx.conf (not nginx.conf.default) looks like:
events {}

http {
    server {
        listen 3000;
        # listen 443 ssl;
        server_name localhost;

        # Set the SSL header only for HTTPS requests.
        if ($scheme = "https") {
            set $ssl_header SSL;
        }

        error_page 401 = @error401;
        client_max_body_size 1000M;

        # Expire rules for static content
        location / {
            # HTTPS header.
            proxy_set_header X-Proto $ssl_header;
            proxy_set_header Host $host;
            proxy_set_header Client-IP $remote_addr;
            # Redirection host.
            proxy_pass http://localhost:300x;
        }
    }
}
I have then tried to add the following to cache:
location ~* \.(jpg|jpeg|css|js)$ {
    expires 1d;
    access_log off;
    add_header Cache-Control "public";
}
I get 404 errors for all my CSS & JS files, while my JPG and JPEG files get cached. I have tried adding the root directive here, and in the location / {} block; this does not seem to make a difference.
I have tried many different configurations - I just want to know how to cache my CSS & JS files!
For reference: this is the location block from my nginx.conf.default file:
location / {
    root html;
    index index.html index.htm;
}
I have tried adding the root in the server block, no luck
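For anyone landing here later, one thing to keep in mind: a regex location does not inherit proxy_pass from location /, so when the extension block matches, nginx tries to serve the file from its own root, where proxied assets don't exist. A minimal sketch of one way around that - the upstream port and the disk path below are placeholders, not values from this config:

```nginx
# Either keep proxying inside the regex location so the upstream
# still serves the files, while nginx adds the cache headers:
location ~* \.(jpg|jpeg|css|js)$ {
    expires 1d;
    access_log off;
    add_header Cache-Control "public";
    proxy_pass http://localhost:3000;   # placeholder upstream port
}

# ...or serve the files straight from disk (hypothetical path):
# location ~* \.(jpg|jpeg|css|js)$ {
#     root /var/www/myapp/public;
#     expires 1d;
# }
```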
I have a website developed using Laravel + Nuxt, and I am using nginx to run it. With nuxt generate + nuxt start I am redirected to the 404 page, but on the live website I get infinite loading: https://flowerqueen.ro/aiusdhiusadfisadiufh. I played around a lot with the config file and checked other answers on Stack Overflow; nothing helps :(
This is my nginx config:
map $sent_http_content_type $expires {
    default                 on;
    text/html               epoch;
    text/css                max;
    application/javascript  max;
    ~image/                 max;
}
server {
    # redirect all HTTP to HTTPS
    listen 80;
    expires $expires;
    index index.php index.html;
    server_name flowerqueen.ro www.flowerqueen.ro;
    return 301 https://flowerqueen.ro$request_uri;
}
server {
    listen 443 ssl;
    server_name flowerqueen.ro www.flowerqueen.ro;
    #ssl on;
    ssl_certificate /etc/ssl/certificate.crt;
    ssl_certificate_key /etc/ssl/private.key;
    index index.php index.html;
    error_page 404 /404.html;
    # expires $expires;

    location / {
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_cache cache;
        proxy_cache_key $host$uri$is_args$args;
        proxy_cache_valid 200 301 302 12h;
        # try_files $uri $uri/ /404.html;
        # add_header X-Proxy-Cache $upstream_cache_status;
    }

    location ^~ /images {
        proxy_cache cache;
        proxy_cache_valid 200 301 302 12h;
    }
}
Nuxt:
ssr: true,
target: 'static',
server: {
    port: 4000,
    host: 'localhost',
},
Finally solved the issue, Google will be happy now.
The problem was here:
generate: {
    fallback: '404.html',
}
I had the page 404.vue under the /pages directory, but this was not working.
Moving 404.vue to 404/index.vue and changing the config to:
generate: {
    fallback: '404/index.html',
}
solved the issue for me.
I have also removed all unnecessary code from the 404 page.
Thank you all for the help :)
I'd like to add a Cache-Control header with nginx for some extensions such as .jpg, but so far I couldn't get any of the solutions I found on the net to work. I will tell you what I have tried.
I have tried variations of the following in different places in the .conf file of my site, and each time the site became blank and I found a lot of 404 errors in the console. The site is developed in React.
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 1d;
    add_header Cache-Control "public, no-transform";
}
My conf file looks like the following. The thing is, I have to do a reverse proxy, as the sites are actually hosted in Docker containers.
server {
    server_name mysite.net;
    root /usr/share/nginx/html;

    location / {
        proxy_pass http://web:3005/;
    }

    location /api/ {
        rewrite ^/api(/.*)$ $1 break;
        proxy_pass http://api:5005/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        fastcgi_read_timeout 1200;
        proxy_read_timeout 1200;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.net-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.net-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = mysite.net) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mysite.net;
    return 404; # managed by Certbot
}
Use the map directive:
map $cache $control {
    1 "public, no-transform";
}

map $cache $expires {
    1       1d;
    default off; # or some other default value
}

map $uri $cache {
    ~*\.(js|css|png|jpe?g|gif|ico)$ 1;
}
server {
    ...
    expires $expires;
    add_header Cache-Control $control;
    ...
}
(You can also put the expires and add_header directives into the location context, or even leave them in the http context.) nginx won't add the header at all (or modify an existing header) if the value calculated via the map expression for the $control variable is an empty string. This is not the only possible solution; you can also rely on the Content-Type response header from your upstream (see this answer for an example).
You should be aware of this documentation excerpt:
There could be several add_header directives. These directives are inherited from the previous configuration level if and only if there are no add_header directives defined on the current level.
Problem:
When I attempt to add caching logic to my NGINX server.conf file all static assets receive a 404
Goal:
I would like to serve specific files types within the /static/* file directory with caching headers via NGINX. I would like these headers to tell clients (browsers) to cache static files locally.
Context:
I will be using www.example.com instead of my own domain in this example.
I have two docker containers that are aware of each other. One is an NGINX server that receives web connections. The other is a Flask container that does backend processing and creates dynamic HTML templates with JINJA.
The NGINX container has a folder called /static which contains files such as .css, .js, .png, .jpg, and others. The /static folder structure looks like this:
/static file structure
/static
├── assets
│   ├── sitemap.xml
│   └── otherfiles...
├── img
│   └── images...
├── js
│   └── jsFiles...
└── css
    └── cssFiles...
NGINX Config - WORKING
server {
    listen 80;
    server_name www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    listen 443 http2 ssl;
    server_name www.example.com;
    ssl_certificate <certpath>;
    ssl_certificate_key <privatekeypath>;
    large_client_header_buffers 4 16k;

    location / {
        proxy_pass http://flask-app-docker:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /robots.txt {
        add_header Content-Type text/plain;
        return 200 "User-agent: *\nAllow: /\nSitemap: https://www.example.com/sitemap.xml";
    }

    location /sitemap.xml {
        add_header Content-Type application/xml;
        try_files $uri /static/assets/sitemap.xml;
    }

    location /static {
        rewrite ^/static(.*) /$1 break;
        root /static;
    }
}
NGINX Config - BROKEN
server {
    listen 80;
    server_name www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    listen 443 http2 ssl;
    server_name www.example.com;
    ssl_certificate <certpath>;
    ssl_certificate_key <privatekeypath>;
    large_client_header_buffers 4 16k;

    # ------ Start of caching section which breaks things ------
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
    # ----------------------------------------------------------

    location / {
        proxy_pass http://flask-app-docker:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /robots.txt {
        add_header Content-Type text/plain;
        return 200 "User-agent: *\nAllow: /\nSitemap: https://www.example.com/sitemap.xml";
    }

    location /sitemap.xml {
        add_header Content-Type application/xml;
        try_files $uri /static/assets/sitemap.xml;
    }

    location /static {
        rewrite ^/static(.*) /$1 break;
        root /static;
    }
}
NGINX Config - DIFF
Here are the lines which are added to the file and break things:
# ------ Start of caching section which breaks things ------
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    expires 30d;
    add_header Vary Accept-Encoding;
    access_log off;
}
# ----------------------------------------------------------
EDIT 11/9/2020
Still unable to get caching working based on the file extension. However, I managed to get a workaround going by placing static files into their own folder and then setting cache headers on all files in it.
Updated the /static location block in my example above to look like this.
I may eventually update this to be the answer as it is working even though it is not the perfect solution or the answer I was looking for.
location /static {
    root /static;
    expires 30d;
    add_header Vary Accept-Encoding;
}
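If extension-based matching is ever needed again, a hedged variant (untested in this setup) would be to nest the regex location inside the /static prefix location, so it can never capture the proxied routes:

```nginx
location /static {
    root /static;

    # Nested regex location: consulted only for URIs under /static,
    # so "location /" and the proxied Flask routes are unaffected.
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
}
```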
I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx.
I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the javascript and css files all fail to load: looking in the Network tab in Chrome web tools, I can see that it is trying to load them via an https url. E.g., one of the non-working file urls is
https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216
I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files.
If I click on this in the Network tab, it takes me to the above url, which redirects to the http version. So the redirect seems to be working in some sense, but not when the files are loaded by an https page. Like I say, I thought I had this covered in the second try_files directive in my config below, but maybe not.
Can anyone see what I'm doing wrong? Thanks, Max
Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block.
NOTE: the urls in here (elearning.dev, cmw-dev.co.uk, etc.) are all just local host names, i.e. they're all aliases for 127.0.0.1.
server {
    listen 443 ssl;
    keepalive_timeout 70;

    ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
    ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
    root /home/max/work/charanga/elearn_container/elearn;

    # ensure that we serve css, js, other statics when requested
    # as SSL, but if the files don't exist (i.e. any non /basket controller)
    # then redirect to the non-https version
    location / {
        try_files $uri @non-ssl-redirect;
    }

    # securely serve everything under /basket (/basket/checkout etc)
    # we need general too, because of the email/username checking
    location ~ ^/(basket|general|cmw/account/check_username_availability) {
        # make sure cached copies are revalidated once they're stale
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        # this serves Rails static files that exist without running
        # other rewrite tests
        try_files $uri @rails-ssl;
        expires 1h;
    }

    location @non-ssl-redirect {
        return 301 http://$host$request_uri;
    }

    location @rails-ssl {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 180;
        proxy_next_upstream off;
        proxy_pass http://127.0.0.1:3000;
        expires 0d;
    }
}
#upstream elrs {
# server 127.0.0.1:3000;
#}
server {
    listen 80;
    server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
    root /home/max/work/charanga/elearn_container/elearn;

    access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
    error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug;

    client_max_body_size 50M;
    index index.html index.htm;

    # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6 (i.e. those *without* SV1 in their user-agent string)
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html

    # make sure gzip does not lose large gzipped js or css files
    # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
    gzip_buffers 16 8k;

    # Disable gzip for certain browsers.
    #gzip_disable "MSIE [1-6].(?!.*SV1)";
    gzip_disable "MSIE [1-6]";

    # blank gif like it's 1995
    location = /images/blank.gif {
        empty_gif;
    }

    # don't serve files beginning with dots
    location ~ /\. { access_log off; log_not_found off; deny all; }

    # we don't care if these are missing
    location = /robots.txt { log_not_found off; }
    location = /favicon.ico { log_not_found off; }
    location ~ affiliate.xml { log_not_found off; }
    location ~ copyright.xml { log_not_found off; }

    # convert urls with multiple slashes to a single /
    if ($request ~ /+ ) {
        rewrite ^(/)+(.*) /$2 break;
    }

    # X-Accel-Redirect
    # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead
    location /zips {
        internal;
        root /var/www/apps/e_learning_resource/shared/assets;
    }

    location /tmp {
        internal;
        root /;
    }

    location /mnt {
        root /;
    }

    # resource library thumbnails should be served as usual
    location ~ ^/resource_library/.*/*thumbnail.jpg$ {
        if (!-f $request_filename) {
            rewrite ^(.*)$ /images/no-thumb.png break;
        }
        expires 1m;
    }

    # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here
    location ~ "lesson viewer.dcr" {
        rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
    }

    # we need this rule so we don't serve the older lessonviewer when the rule below is matched
    location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
    }

    location ~ v6lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
    }

    location ~ lessonViewer.swf {
        rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
    }

    location ~ lgn111.dat {
        empty_gif;
    }

    # try to get autocomplete school names from memcache first, then
    # fallback to rails when we can't
    location /schools/autocomplete {
        set $memcached_key $uri?q=$arg_q;
        memcached_pass 127.0.0.1:11211;
        default_type text/html;
        error_page 404 =200 @rails; # 404 not really! Hand off to rails
    }

    location / {
        # make sure cached copies are revalidated once they're stale
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        # this serves Rails static files that exist without running other rewrite tests
        try_files $uri @rails;
        expires 1h;
    }

    location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 180;
        proxy_next_upstream off;
        proxy_pass http://127.0.0.1:3000;
        expires 0d;
    }
}
EDIT: It just occurred to me that this might be better on Super User or Server Fault, or perhaps both. I'm not sure what the cross-site posting rules are.
I'm currently using nginx coupled with Apache, serving the following static files with success.
Extract from my /sites-enabled/default:
location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|txt|xml)$ {
    root /home/website/public_html/app/public;
}
But I also have some cache files located here:
/home/website/public_html/app/storage/cache
/home/website/public_html/app/storage/views
/home/website/public_html/app/storage/sessions
/cache and /sessions also have sub-directories. All sub-directories and files have random filenames and no extension.
I want nginx to also serve these files.
I tried this (example for the /views folder), but without success. I even have nothing in the logs, but nginx restarts correctly, and the website loads with no errors.
location /views {
    alias /home/website/public/app/app/storage/views;
    access_log /var/log/nginx/web.views.access.log;
    error_log /var/log/nginx/web.views.error.log;
}
I tried this as well, with the same effect as above:
location ~* ^.$ {
    root /home/website/public/app/app/storage/views;
    access_log /var/log/nginx/web.views.access.log;
    error_log /var/log/nginx/web.views.error.log;
}
I also tried adding add_header Content-Type application/octet-stream; in these two tries, but nothing changed.
Finally, here is the http part of my nginx.conf:
http {
    include /etc/nginx/mime.types;
    access_log /var/log/nginx/access.log;

    sendfile on;
    keepalive_timeout 65;
    tcp_nodelay on;

    gzip on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
(/var/log/nginx/access.log and /var/log/nginx/error.log don't show anything related to my issue either.)
Thanks for any clue and help!
EDIT
Complete current /sites-enabled/default file (and yes, there's a double /app/, it's normal ;) )
# You may add here your
# server {
#     ...
# }
# statements for each of your virtual hosts

server {
    listen 80; ## listen for ipv4
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    server_name www.website.com website.com;
    #access_log /var/log/nginx/localhost.access.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080/;
        access_log off;
        #root /var/www;
        #index index.html index.htm;
    }

    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|txt|xml)$ {
        root /home/website/public_html/app/public;
        expires 30d;
    }

    location ~* ^.$ {
        add_header Content-Type application/octet-stream;
        root /home/website/public/app/app/storage/views;
        access_log /var/log/nginx/bt.views.access.log;
        error_log /var/log/nginx/bt.views.error.log;
    }
}
The problem is that you need something to identify the files without any extension - a sub-directory or something that's always present within the request. Your regular expression only matched requests that start and end with a dot (e.g. http://example.com/.). The following server configuration assumes that all URLs start with storage, as this would be the only possibility to identify those files.
Please note that I’m using the try_files directive to rewrite the internal path where nginx should look for the file. The root directive is not meant for what you want to achieve.
And last but not least, you should always nest location blocks with regular expressions. There is no limit in the nesting level. nginx will create some kind of tree data structure to search for the best matching location. So think of a tree while writing the blocks.
server {
    listen 80 default;
    listen [::]:80 default ipv6only=on;

    server_name www.website.com website.com;
    root /home/website/public_html/app;

    location / {
        # Matches any request for a URL ending on any of the extensions
        # listed in the regular expression.
        location ~* \.(jpe?g|gif|css|png|js|ico|txt|xml)$ {
            expires 30d;
            access_log off;
            try_files /public$uri =404;
        }

        # Matches any request starting with storage.
        location ~* ^/storage {
            access_log /var/log/nginx/bt.views.access.log;
            error_log /var/log/nginx/bt.views.error.log;
            try_files /app$uri =404;
        }

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080/;
        access_log off;
    }
}