Nginx as reverse-proxy: serving files without extension - nginx

I'm currently using Nginx coupled with Apache, and it serves the following static files successfully.
Extract from my /sites-enabled/default:
location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|txt|xml)$ {
root /home/website/public_html/app/public;
}
But I also have some cache files located here:
/home/website/public_html/app/storage/cache
/home/website/public_html/app/storage/views
/home/website/public_html/app/storage/sessions
/cache and /sessions also have sub-directories. All sub-directories and files have random filenames and no extension.
I want Nginx to also serve these files.
I tried this (example for the /views folder), but without success. There is nothing in the logs, Nginx restarts correctly, and the website loads with no errors.
location /views {
alias /home/website/public/app/app/storage/views;
access_log /var/log/nginx/web.views.access.log;
error_log /var/log/nginx/web.views.error.log;
}
I also tried this, but with the same result:
location ~* ^.$ {
root /home/website/public/app/app/storage/views;
access_log /var/log/nginx/web.views.access.log;
error_log /var/log/nginx/web.views.error.log;
}
I also tried adding add_header Content-Type application/octet-stream; to both of these attempts, but nothing changed.
Finally, here is the http section of my nginx.conf:
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 65;
tcp_nodelay on;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
(/var/log/nginx/access.log and /var/log/nginx/error.log don't show anything related to my issue either.)
Thanks for any clues and help!
EDIT
Complete current /sites-enabled/default file (and yes, there's a double /app/, it's normal ;) )
# You may add here your
# server {
# ...
# }
# statements for each of your virtual hosts
server {
listen 80; ## listen for ipv4
listen [::]:80 default ipv6only=on; ## listen for ipv6
server_name www.website.com website.com;
#access_log /var/log/nginx/localhost.access.log;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8080/;
access_log off;
#root /var/www;
#index index.html index.htm;
}
location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|txt|xml)$ {
root /home/website/public_html/app/public;
expires 30d;
}
location ~* ^.$ {
add_header Content-Type application/octet-stream;
root /home/website/public/app/app/storage/views;
access_log /var/log/nginx/bt.views.access.log;
error_log /var/log/nginx/bt.views.error.log;
}
}

The problem is that you need something to identify the files that have no extension: a sub-directory or something that is always present in the request. Your regular expression (^.$) only matches a request URI consisting of a single character (e.g. http://example.com/), so it never matches your cache files. The following server configuration assumes that all such URLs start with /storage, as this is the only way to identify those files.
Please note that I’m using the try_files directive to rewrite the internal path where nginx should look for the file. The root directive is not meant for what you want to achieve.
And last but not least, you should nest location blocks that use regular expressions. There is no limit on the nesting level; nginx builds a tree-like data structure to search for the best matching location, so think of a tree while writing the blocks.
server {
listen 80 default;
listen [::]:80 default ipv6only=on;
server_name www.website.com website.com;
root /home/website/public_html/app;
location / {
# Matches any request for a URL ending on any of the extension
# listed in the regular expression.
location ~* \.(jpe?g|gif|css|png|js|ico|txt|xml)$ {
expires 30d;
access_log off;
try_files /public$uri =404;
}
# Matches any request whose URI starts with /storage.
location ~* ^/storage {
access_log /var/log/nginx/bt.views.access.log;
error_log /var/log/nginx/bt.views.error.log;
try_files /app$uri =404;
}
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8080/;
access_log off;
}
}
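Since the cache files have no extension, nginx cannot derive a MIME type from mime.types and falls back to the default_type directive (text/plain if nothing else is configured). If you prefer them delivered as a generic binary type, a minimal variant of the /storage block, assuming the same paths as above, could be:
location ~* ^/storage {
    # files have no extension, so set an explicit fallback MIME type
    default_type application/octet-stream;
    access_log /var/log/nginx/bt.views.access.log;
    error_log /var/log/nginx/bt.views.error.log;
    try_files /app$uri =404;
}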

Related

nginx domain with and without wildcard

I have an nginx configuration file which has to serve example.com and www.example.com.
server {
listen 80;
server_name example.com;
return 301 http://www.example.com$request_uri;
}
server {
listen 80;
server_name www.example.com;
auth_basic "example Login";
auth_basic_user_file /etc/nginx/.htpasswd;
root /projects/www/example;
index index.html;
location ~* \.(html|js|jpg|png|gif|css|perfumes|imgs|map|fonts|otf)$ {
index index.do index.html index.htm;
access_log off;
}
location /.protected {
access_log off;
auth_basic off;
}
location /health {
access_log off;
auth_basic off;
}
location / {
try_files $uri $uri/index.html;
}
location /hello {
proxy_pass http://localhost:8282;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
}
I am trying to protect this domain for now because we are preparing the launch. But there is one path that should let requests through without a password: any link that has '.protected' in it.
For example,
www.example.com/.protected/file1.txt should be allowed without entering a password.
It is working OK, but the issue is example.com/.protected/file1.txt (without 'www').
If I type only the domain name 'example.com' (without 'www'), it automatically redirects to www.example.com as configured, but 'example.com/.protected/file1.txt' doesn't redirect to 'www.example.com/.protected/file1.txt'. It seems that when the URL has a path, the domain name (without 'www') doesn't redirect to 'www.example.com'.
I am getting 'curl: (6) Could not resolve host:'
Is there anything wrong with my configuration file?
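For reference, the return 301 in the first server block already carries the full path, because $request_uri contains the original URI including any query string:
server {
    listen 80;
    server_name example.com;
    # $request_uri preserves the path, so /.protected/file1.txt
    # is forwarded to www.example.com/.protected/file1.txt
    return 301 http://www.example.com$request_uri;
}
A 'curl: (6) Could not resolve host' error is raised before any HTTP request is sent, so it points at name resolution on the client rather than at this server block.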

NGINX - 404 errors after caching static (CSS/JS) files

I am trying to set up nginx to cache my static files, such as jpg, jpeg, css and js.
The following is what my nginx.conf (not nginx.conf.default) looks like:
events {}
http {
server {
listen 3000;
# listen 443 ssl;
server_name localhost;
# Set the SSL header only for HTTPS requests.
if ($scheme = "https") {
set $ssl_header SSL;
}
error_page 401 = #error401;
client_max_body_size 1000M;
# Expire rules for static content
location / {
# HTTPS header.
proxy_set_header X-Proto $ssl_header;
proxy_set_header Host $host;
proxy_set_header Client-IP $remote_addr;
# Redirection host.
proxy_pass http://localhost:300x;
}
location ~* .(jpg|jpeg|css|js)$ {
expires 1d;
access_log off;
add_header Cache-Control "public";
}
I have then tried to add the following block to enable caching:
location ~* .(jpg|jpeg|css|js)$ {
expires 1d;
access_log off;
add_header Cache-Control "public";
}
I get 404 errors for all my CSS & JS files. My JPG and JPEG files get cached. I have tried adding the root directory here, and in the location / {} block, but this does not seem to make a difference.
I have tried many different configurations; I just want to know how to cache my CSS & JS files!
For reference: this is the location block from my nginx.conf.default file:
location / {
root html;
index index.html index.htm;
}
I have tried adding the root in the server block, with no luck.
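One possible explanation, sketched under the assumption that the CSS/JS are served by the proxied application rather than from a local directory: a location that only sets caching headers has neither a root nor a proxy_pass, so nginx looks for the files under its compiled-in default root and returns 404. Keeping the proxy inside the static location while adding the cache headers would look roughly like this:
location ~* \.(jpg|jpeg|css|js)$ {
    expires 1d;
    access_log off;
    add_header Cache-Control "public";
    # forward to the same upstream as location / so the files can still be found
    proxy_set_header Host $host;
    proxy_set_header Client-IP $remote_addr;
    proxy_pass http://localhost:300x;   # same placeholder port as in the question
}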

Senaite LIMS (Plone 4.3.18) css not working on Nginx with https enabled

I've installed and set up senaite.lims, a Plone extension, running on Plone 4.3.18 installed by the Unified Installer, with senaite.lims added to the buildout.cfg eggs.
It's running fine on port 8080, and I can get Nginx to work proxying / to :8080, but when I enable https, the CSS of the site suddenly stops working.
I looked at the source, and the produced HTML page links to the stylesheet with http://..., which I suspect may cause problems, although if I open the .css file directly in the browser it works fine.
I set up and tried both redirecting port 80 to https and serving both an http and an https version, but neither gets the page to render with the CSS. If anyone has any tips, or sees something misconfigured in the nginx config below, any help would be greatly appreciated.
Here is my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
default_type application/octet-stream;
include /etc/nginx/mime.types;
sendfile on;
keepalive_timeout 75;
upstream plone {
server 127.0.0.1:8080;
}
server {
listen 80;
listen 443 ssl http2;
server_name 99.99.99.99; # changed for posting on SO
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
error_log /var/log/nginx/nginx.vhost.error.log;
location / {
proxy_pass http://localhost:8080/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_buffer_size 128k;
proxy_buffers 8 128k;
proxy_busy_buffers_size 256k;
}
}
}
You forgot to rewrite the URL so that Zope's VirtualHostMonster knows the external scheme and host (which is why the generated links still point to http://), e.g.:
rewrite ^(.*)$ /VirtualHostBase/$scheme/$host/senaite/VirtualHostRoot/$1 break;
Here is a complete working config for SENAITE:
server {
listen 80;
server_name senaite.mydomain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name senaite.mydomain.com;
# https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
include snippets/ssl-senaite.mydomain.com.conf;
include snippets/ssl-params.conf;
include snippets/well-known.conf;
access_log /var/log/nginx/senaite.access.log;
error_log /var/log/nginx/senaite.error.log error;
# Allow Cross-Origin Resource Sharing from our HTTP domain
add_header "Access-Control-Allow-Origin" "http://senaite.ridingbytes.com";
add_header "Access-Control-Allow-Credentials" "true";
add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS";
add_header "X-Frame-Options" "SAMEORIGIN";
if ($http_cookie ~* "__ac=([^;]+)(?:;|$)" ) {
# prevent infinite recursions between http and https
break;
}
# rewrite ^(.*)(/logged_out)(.*) http://$server_name$1$2$3 redirect;
location / {
set $backend http://haproxy;
# API calls take a different backend w/o caching
if ($uri ~* "##API") {
set $backend http://api;
}
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
rewrite ^(.*)$ /VirtualHostBase/$scheme/$host/senaite/VirtualHostRoot/$1 break;
# proxy_pass $backend;
proxy_pass http://plone;
}
}
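Applied to the configuration from the question, a minimal sketch (assuming the Plone site id is senaite and reusing the upstream plone block already defined there) would be:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # tell Zope the external scheme and host so the generated links use https
    rewrite ^(.*)$ /VirtualHostBase/$scheme/$host/senaite/VirtualHostRoot/$1 break;
    proxy_pass http://plone;
}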

Nginx - reverse proxy a Ghost blog with /subfolder redirect

I have a working nginx instance with the rules below, but I'm having difficulty pointing all the requests to domain.com/ghost.
I tried modifying the location / {} block to location /ghost/ {}, but without success; I just get a 404 from the Ghost app. Any suggestions?
server {
listen 80;
server_name domain.com;
root /home//user/ghost/;
index index.php;
# if ($http_host != "domain.com") {
# rewrite ^ http://domain.com$request_uri permanent;
# }
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:2368;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png|ttf|woff)$ {
access_log off;
expires 30d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
proxy_pass http://127.0.0.1:2368;
}
location = /robots.txt { access_log off; log_not_found off; }
location = /favicon.ico { access_log off; log_not_found off; }
location ~ /\.ht {
deny all;
}
}
I'm using a regexp location directive for a similar proxy setup. This is the minified configuration file:
worker_processes 1;
pid /path/to/file.pid;
worker_priority 15;
events {
worker_connections 512;
accept_mutex on;
}
http {
server {
error_log /path/to/log/error.log error;
listen 127.0.0.1:9000;
server_name example.com;
location ~* (/ghost) {
expires epoch;
proxy_no_cache 1;
proxy_pass http://localhost:1234;
}
location / {
proxy_pass http://localhost:1234;
}
}
}
I have solved a similar problem with other apps that have no support for subfolders. Both apps are built on the same platform, so they both try to work in the /fx dir. I had to place one of them into the subfolder /gpms.
The idea is to redirect requests whose referer comes from the subfolder but whose target links outside of it: I just add the subfolder to the beginning of such URIs. It is not ideal, but it works.
Here is my nginx config:
server {
listen 80;
server_name mydomain.com;
location / {
rewrite ^/$ /fx/;
proxy_pass http://127.0.0.1:56943/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 300;
}
error_log /var/log/nginx/debug.log debug;
set $if_and_hack "";
if ( $http_referer ~ '^http://mydomain.com/gpms/.*$' ) {
set $if_and_hack "refgpms";
}
if ( $uri !~ '^/gpms/.*$' ) {
set $if_and_hack "${if_and_hack}_urinogpms";
}
if ( $if_and_hack = "refgpms_urinogpms" ) {
rewrite ^/(.*)$ http://$host/gpms/$1;
}
location /gpms/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cookie_path /fx /;
proxy_pass http://127.0.0.1:12788/fx/;
proxy_redirect default;
}
}
External links will be broken, but that is not critical for me, and I guess it could be corrected.
$if_and_hack is there to work around nginx's limitation on nested conditions.
By the way, I ran into a cookie issue: the cookies were set with a path, and I hit a browser bug where cookies were not sent for the new path after the redirect, so I just strip the path from the cookies.
Note the full URL form in the rewrite: this form immediately redirects the browser to the new page, so you should not change it to just "/gpms/$1".
As an alternative, I guess it may be possible to use an nginx module to inspect the HTML content and modify links (I have not tried this), or to use subdomains instead of subfolders.
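For the HTML-rewriting idea, nginx's ngx_http_sub_module (not always compiled in) can do simple link rewriting in proxied responses; a rough sketch, assuming the app emits absolute links starting with /fx/, would be:
location /gpms/ {
    proxy_pass http://127.0.0.1:12788/fx/;
    # sub_filter cannot rewrite compressed upstream responses
    proxy_set_header Accept-Encoding "";
    # rewrite absolute links in the returned HTML to the /gpms/ prefix
    sub_filter 'href="/fx/' 'href="/gpms/';
    sub_filter_once off;
}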
Good news! As of version 0.4.0 Ghost now supports subdirectory installation. And there are already people who've figured this out and created tutorials.

nginx one ipaddress but 2 sites served from subfolder

I have successfully configured nginx; with the default site it works correctly.
Now I have 2 sites, one at /home/bugz and another at /home/git/github/public, and only one IP, 10.10.10.10 (I don't have DNS set up, hence can't use domain names).
I want the sites served at
http://10.10.10.10/bugz and http://10.10.10.10/github respectively.
Below are the two config files:
server {
listen *:80;
server_name 10.10.10.10;
server_tokens off;
root /home/bugz;
# individual nginx logs for this gitlab vhost
access_log /var/log/nginx/bugzilla_access.log;
error_log /var/log/nginx/bugzilla_error.log;
location /bugz {
index index.html index.htm index.pl;
}
location ~ \.pl|cgi$ {
try_files $uri =404;
gzip off;
fastcgi_pass 127.0.0.1:8999;
fastcgi_index index.pl;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
and
upstream gitlab {
server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}
server {
# listen *:80 default_server; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
listen *:80; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
server_name 10.10.10.10; # e.g., server_name source.example.com;
server_tokens off; # don't show the version number, a security best practice
root /home/git/gitlab/public;
# individual nginx logs for this gitlab vhost
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log;
location / {
# serve static files from the defined root folder;
# @gitlab is a named location for the upstream fallback, see below
try_files $uri $uri/index.html $uri.html @gitlab;
}
# if a file, which is not found in the root folder is requested,
# then the proxy pass the request to the upsteam (gitlab unicorn)
location @gitlab {
proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
proxy_redirect off;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://gitlab;
}
}
How do I achieve this?
Your nginx.conf should contain something like this inside the http block:
include /etc/nginx/sites-enabled/*;
Then you will have 2 configuration files in the /etc/nginx/sites-available folder (to which the symlinks in sites-enabled point).
Each conf will need its server listening on a different port, i.e. one on port 80 and one on port 81:
server1.conf
server {
listen 80;
server_name localhost;
server2.conf
server {
listen 81;
server_name localhost;
-OR-
Or have a different server_name for each server in the conf files and play with the hosts file.
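A sketch of that variant, with hypothetical names bugz.local and git.local that you would also add to the clients' hosts file pointing at 10.10.10.10:
# hosts file entry on the clients: 10.10.10.10 bugz.local git.local
server {
    listen 80;
    server_name bugz.local;
    root /home/bugz;
    # the .pl/.cgi fastcgi location from the first config goes here
}
server {
    listen 80;
    server_name git.local;
    root /home/git/gitlab/public;
    # the try_files / @gitlab proxy setup from the second config goes here
}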
I don't understand the need for these huge configs; I'd configure only one site with two locations:
server {
server_name 10.10.10.10;
location /bugz {
root /root/to/bugz;
access_log /var/log/nginx/bugzilla_access.log;
error_log /var/log/nginx/bugzilla_error.log;
index index.html index.htm index.pl;
# try_files statement
}
location /git {
root /home/git/gitlab/public;
#access and error log and rest of config
}
location ~ \.pl|cgi$ { }
location @gitlab { }
}
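One caveat with this combined version: inside location /bugz a root directive appends the full URI, so a request for /bugz/index.html is looked up as /root/to/bugz/bugz/index.html. If the files actually live directly under /home/bugz, alias is the better fit, for example:
location /bugz/ {
    # alias replaces the /bugz/ prefix instead of appending the whole URI
    alias /home/bugz/;
    index index.html index.htm index.pl;
}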
