Nginx try_files - log hit / miss

I have two mounts:
/mnt/nfs - an NFS mount that contains many files.
/mnt/ssd - a local SSD disk that acts as a cache for the NFS mount above.
In nginx I have configured a location like so:
location ~ /my_location/(.*) {
    alias /mnt/;
    try_files ssd/$1 nfs/$1 =404;
}
This itself works just fine.
I would like to log when the file was found on the SSD and when it had to be fetched from the NFS mount.
Finding the file from ssd would be logged as HIT.
Having to go to nfs would be logged as MISS.
How might I achieve this?

One possible solution that I just came up with, using an extra named location:
location /my_location/ {
    alias /mnt/ssd/;
    set $file_source ssd;
    try_files $uri @nfs;
}
location @nfs {
    alias /mnt/nfs/;
    set $file_source nfs;
    try_files $uri =404;
}
Now you know what to do with that new variable.
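For completeness, here is a hedged sketch of how that variable could be surfaced in the access log (the map, the log_format name, and the log path are assumptions, not part of the original answer):

```nginx
# http-level context: translate $file_source into HIT/MISS at log time.
map $file_source $cache_status {
    ssd     HIT;
    nfs     MISS;
    default -;          # requests that never set $file_source
}

log_format hitmiss '$remote_addr [$time_local] "$request" '
                   '$status cache=$cache_status';

server {
    access_log /var/log/nginx/hitmiss.log hitmiss;
    # ... the two locations from above ...
}
```

Because map is evaluated lazily, $cache_status is only resolved when the log line is written, after set has run in whichever location served the request.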

Related

Add location block in Nginx config file in Synology NAS

I need to modify the nginx config file (/etc/nginx/app.d/server.webstation-vhost.conf) to add one line so that Laravel routing works correctly:
location / { try_files $uri $uri/ /index.php?$query_string; }
The problem is that /etc/nginx/app.d/server.webstation-vhost.conf is ALWAYS OVERWRITTEN when the NAS reboots.
Does anybody have experience with how to handle this problem?
Many thanks!
Not sure if you figured this out, but if you haven't, under that vhost conf file (/etc/nginx/app.d/server.webstation-vhost.conf), look for something like:
include /usr/local/etc/nginx/conf.d/f2f0a62b-74d6-4c34-a745-d0156f13c9d6/user.conf*;
Instead of f2f0a62b-74d6-4c34-a745-d0156f13c9d6 you should see another unique id for your nginx app. Create/edit the mentioned user.conf file (without the asterisk) with the contents you need; in my case I created a file with the contents below:
location / {
    try_files $uri $uri/ /index.html;
}
Then I had to restart nginx with the command sudo synoservice --restart nginx.
And it worked.
PS.: I believe it should work for any DSM v6.1 or later (maybe 6.0.x as well).
For research I used:
https://community.synology.com/enu/forum/1/post/122043
https://community.synology.com/enu/forum/1/post/120538

How do I correctly use try_files when looking in two different directories for files to serve?

I'm quite new to Nginx, so I might be misunderstanding what try_files can do.
For my local development setup I have multiple installations that will each be accessible via their own subdomain. These installations are being migrated into a new folder structure, but I still want the ability to support both at the same time. When pulled via git, the new full path looks like this:
/home/tom/git/project/v3/[installation]/public/
The old structure goes 1 directory deeper namely as follows:
/home/tom/git/project/v3/[installation]/workspace/public
Here [installation] varies according to the installation name, and the /public folder is the root for nginx to work from.
The root is determined by the subdomain and is extracted via regex like so:
server_name ~^(?<subdomain>[^.]+)\.local\.project\.test;
So far I've managed to get all this working for one of the folder structures, but not both at the same time. My Nginx configuration for this local domain is below; it is what I've tried but just can't seem to get working. As soon as I pass the @workspace named location as the fallback for try_files, it always defaults to 404.
index index.html index.htm index.nginx-debian.html index.php;
server_name ~^(?<subdomain>[^.]+)\.local\.project\.test;
root /home/tom/git/project/v3/$subdomain/public/;

location / {
    try_files $uri @workspace =404;
}
location @workspace {
    root /home/tom/git/project/v3/$subdomain/workspace/public/;
    try_files $uri =404;
}
I have also tried shortening the root and passing the following parameters to try_files
root /home/tom/git/project/v3/$subdomain;
location / {
    try_files /public/$uri /workspace/public/$uri =404;
}
But this still defaults to a 404; with $uri/ as a third parameter it emits a 403 Forbidden while trying to list the directory index of the root.
I hope someone can provide some advice or an alternative approach to this issue. If I need to provide additional data, let me know.
Thanks in advance.
The named location must be the last element of a try_files statement.
For example:
location / {
    try_files $uri @workspace;
}
location @workspace {
    ...
}
See this document for details.
The $uri variable includes a leading /, so your constructed pathnames contain a //, which may be why they fail.
For example:
location / {
    root /home/tom/git/project/v3/$subdomain;
    try_files /public$uri /workspace/public$uri =404;
}

Use try_files with different document roots

I know there is a suggested solution in How to use try_files with 2 or more roots, but it does not quite fit what I am trying to achieve.
We are in the process of migrating a big old webserver with hundreds of thousands of pages to a new webserver. During this process we are updating the content. The directory for the new content was created from scratch. While we update the content, we want to make sure that if something is missing in the new folder it can be retrieved from the old one.
My simplified folder structure looks like this:
/mnt/oldcontent
/var/opt/data/company/newcontent
For our scenario the ideal solution would be if we could do something like this:
location / {
    try_files /var/opt/data/company/newcontent/$uri /mnt/oldcontent/$uri ...;
}
I know this is invalid syntax.
Your solution would need the root to be set to the root of the filesystem.
As a location can only serve a single root, you could use a named location to try the other one.
For example:
root /var/opt/data/company/newcontent;

location / {
    try_files $uri $uri/ @oldcontent;
}
location @oldcontent {
    root /mnt/oldcontent;
    try_files $uri =404;
}
See this document for details.
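As a side note, during a migration like this it can help to see which root actually served each response. A minimal sketch, assuming you are free to add a debug response header (the header name X-Content-Source is an invention for illustration):

```nginx
root /var/opt/data/company/newcontent;

location / {
    # Responses generated here came from the new tree.
    add_header X-Content-Source new always;
    try_files $uri $uri/ @oldcontent;
}

location @oldcontent {
    root /mnt/oldcontent;
    # Responses generated here fell back to the old tree.
    add_header X-Content-Source old always;
    try_files $uri =404;
}
```

Because add_header directives defined in the current level replace inherited ones, only the header of the location that finally generates the response is sent, so a quick curl -I against a few URLs shows which tree served them.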

NGINX Configuration on Digital Ocean (fallback)

I have configured my local nginx server with this snippet:
location / {
    root html;
    index index.html;
    try_files $uri $uri/ /index.html;
}
which is inside http { server { ... } }. I also have an nginx server on Digital Ocean, and I put the above snippet in the same location (...) in the file /etc/nginx/nginx.conf. However, those changes are ignored by the server, i.e., the behavior after is the same as the behavior before (and unlike the behavior of the local server).
What am I doing wrong?
After every change you make to your nginx files (or to your website) you need to reload nginx to apply the changes, using the following in the terminal/command line: sudo nginx -s reload
You can also run nginx -t to test the nginx configuration for errors.
You can (and probably should) run them together to make it easier:
sudo nginx -t && sudo nginx -s reload
It is also necessary to change
try_files $uri $uri/ =404;
to
try_files $uri $uri/ /index.html;
in two places in /etc/nginx/sites-available.

nginx with site in a subdir which does not match the ends of url

I am trying to use the Laravel PHP framework, placed in a directory called /home/usr/proj/laravel. As we know, Laravel's public html lives in /home/usr/proj/laravel/public, so my problem is how to configure nginx such that when I access mysite.com/laravel/ or mysite.com/laravel, I am in fact redirected to laravel/public/index.php.
Also, it seems there is an nginx rule, suggested by the official Laravel docs, to make the URLs look pretty:
location / {
    try_files $uri $uri/ /index.php?$query_string;
}
How can I use this in my case?
UPDATE
It seems the following code works for me (but gives me the error NotFoundHttpException in RouteCollection.php line 145:, maybe caused by my router settings):
location /laravel {
    root /home/usr/proj/laravel/public;
    index index.php index.html;
    try_files $uri $uri/ /laravel/public/index.php?$query_string;
}
Regarding your Update, I think that you should keep your original try_files syntax:
try_files $uri $uri/ /index.php?$query_string;
since the location is set to /laravel and the root already points at the public folder. The way it is currently written, the fallback URI is appended to the root, so nginx ends up looking for /home/usr/proj/laravel/public/laravel/public/index.php on disk.
You should also check how to configure your application URL so that it contains the location prefix part of the URL. I am not quite sure how Laravel 5 is configured, since my experience is with Laravel 4.
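If the app is to stay under the /laravel prefix, an alias-based variant avoids the problem of root re-appending the URI. A hedged sketch, not a verified Laravel deployment recipe (the PHP-FPM socket path is an assumption; adjust it to your setup):

```nginx
location /laravel {
    # alias maps /laravel/... directly onto public/..., unlike root,
    # which would append the full /laravel/... URI to the path.
    alias /home/usr/proj/laravel/public;
    index index.php;
    # Fall back to Laravel's front controller, preserving the query string.
    try_files $uri $uri/ /laravel/index.php?$query_string;

    location ~ \.php$ {
        include fastcgi_params;
        # With alias, $request_filename already resolves to the real file.
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # assumed socket path
    }
}
```

With this layout the fallback URI /laravel/index.php re-enters the same location and resolves to /home/usr/proj/laravel/public/index.php, which is what Laravel expects.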
