Nginx location rules not applying

I want to run both WordPress and YOURLS on one domain, configured by an NGINX server block (not the default site). Since both need to handle URLs differently, they need different try_files directives. WordPress sits at the root of the domain (domain.tld), while YOURLS is installed in the /g/ directory. Despite the two location rules, I get 404s on any link generated by YOURLS (e.g. domain.tld/g/linkname; all of them redirect to external URLs), though I can access the admin backend.
As far as I have read, declaring two location rules (one for /g/ and one for /) should suffice to let NGINX handle the plain and the /g/ URLs differently - is there something wrong in my thinking?
The try_files rules are correct and do work well in other single-application server blocks (WordPress as well as YOURLS, each installed in its own server block).
The server block configuration looks like this:
server {
    listen [::]:80;
    listen 80;
    server_name domain.tld www.domain.tld;
    return 301 https://domain.tld$request_uri;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    root /var/www/html/domain.tld;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name domain.tld www.domain.tld;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
    }

    location /g/ {
        try_files $uri $uri/ /yourls-loader.php$is_args$args;
        expires 14d;
        add_header Cache-Control 'public';
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}

The problem with the location /g/ try_files directive is that the path to the YOURLS loader isn't correct. Since the URL handler (yourls-loader.php) lives inside the /g directory, the fallback path has to include the /g directory:
try_files $uri $uri/ /g/yourls-loader.php$is_args$args;
A location block does not make the try_files fallback relative to that location; the fallback is resolved relative to the root directive given above, so the /g prefix must be written out.
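Putting it together, the corrected location block from the configuration above would look like this (unchanged apart from the fallback path):

location /g/ {
    # fall back to the YOURLS loader inside /g/, resolved from the root directive
    try_files $uri $uri/ /g/yourls-loader.php$is_args$args;
    expires 14d;
    add_header Cache-Control 'public';
}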

Related

Setup phpMyAdmin inside website subdirectory

I have an NGINX web server with two domains and it also runs phpMyAdmin.
phpMyAdmin is working fine and I access it through the non-HTTPS URL below:
public-ip-address/phpMyAdmin
This is how the symbolic link was set up:
sudo ln -s /usr/share/phpmyadmin/ /var/www/html
Is there a way I can point phpMyAdmin to a website's subdirectory?
For example, I would like to access the phpMyAdmin login page by accessing the following URL:
domain1.com/phpMyAdmin/
How can I achieve this? domain1.com has HTTPS enabled, so it would also secure my phpMyAdmin login.
The server block is the same as the default block for NGINX. I created a new config file by copying it to domain1.com in the /etc/nginx/sites-available folder.
The only changes are the server_name and root directives; everything else is default.
server_name domain1.com www.domain1.com;
root /var/www/domain1.com/html;
I am using certbot for Let's Encrypt SSL certificates. My server block config is shared below:
# Server Block Config for domain1.com
server {
    root /var/www/domain1.com/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name domain1.com www.domain1.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        # # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.domain1.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = domain1.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name domain1.com www.domain1.com;
    return 404; # managed by Certbot
}
Contents of /etc/nginx/snippets/fastcgi-php.conf:
# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;
# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi.conf;
Here is the location block that should work for you (at least a similar config works for me):
location ~* ^/phpmyadmin(?<pmauri>/.*)? {
    alias /usr/share/phpmyadmin/;
    index index.php;
    try_files $pmauri $pmauri/ =404;

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$pmauri;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
}
Place it before the default PHP handler location block, or the default PHP handler block will take precedence and this configuration won't work!
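In terms of placement, the ordering inside the server block would look roughly like this (a sketch based on the config above; regex locations are matched in the order they appear, so the phpMyAdmin block has to come first):

server {
    # ... listen, server_name, root and index as in the server block above ...

    # phpMyAdmin handler (must appear before the generic PHP handler)
    location ~* ^/phpmyadmin(?<pmauri>/.*)? {
        # ... alias, try_files and nested PHP location from the block above ...
    }

    # generic PHP handler for the rest of the site
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
}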
You can simply add another symlink to the domain1.com root, while keeping everything else the same, just as you did for the default domain.
sudo ln -s /usr/share/phpmyadmin/ /var/www/domain1.com/html
Am I missing something?
I came to this thread while looking for a solution to another problem (Nginx alias breaks due to the try_files $uri alias bug), but since you are already using a symlink to phpMyAdmin for the site you access through the IP address, you can do the same for any domain.

Yourls Errors in WordPress

I have WordPress working well at mysite.com,
but YOURLS, which is installed at mysite.com/u, is not working: when I click on any shortened link I get a 404 error (from WordPress).
However, I can get YOURLS to work by adding this to nginx.conf:
location /u { try_files $uri $uri/ /u/yourls-loader.php; }
But then WordPress links break.
Here is my default nginx.conf
I know the fix is to add try_files $uri $uri/ /u/yourls-loader.php; somewhere in nginx.conf, but where do I put it without breaking WordPress?
=================== Update 1 =========================
I got this partially working with the same config, but I noticed that WordPress links starting with u don't work, e.g. http://example.com/understand-math redirects to Error 403 - Forbidden instead.
???
================ Update 2 ============
OK, I fixed it by just adding another slash: location /u/ instead of location /u.
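Put together, the relevant part of the config would look roughly like this (a sketch assuming WordPress at the root with the usual try_files fallback; the trailing slash is the important part):

# WordPress at the site root
location / {
    try_files $uri $uri/ /index.php?$args;
}

# YOURLS under /u/ -- the trailing slash stops URLs like
# /understand-math from matching this prefix
location /u/ {
    try_files $uri $uri/ /u/yourls-loader.php;
}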
YOURLS NGINX CONFIGURATION
server {
    # Listen IPv4 & v6
    listen 80;
    listen [::]:80;

    # Optional SSL stuff
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate example.com.crt;
    ssl_certificate_key example.com.key;

    # Server names
    server_name example.com www.example.com;

    # Root directory (NEEDS CONFIGURATION)
    root /path/to/files;

    # Rewrites
    location / {
        # Try files, then folders, then yourls-loader.php
        # --- The most important line ---
        try_files $uri $uri/ /yourls-loader.php;

        # PHP engine
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock; # Can be different
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
GITHUB DOCUMENTATION

Nginx sites-available doesn't work

I've got this really simple server block under sites-available.
Problem: when I try to access mydomain.com, Nginx returns a « 404 Not Found », but if I try to access a particular file directly, such as mydomain.com/index.php, it works fine.
server {
    listen 80;
    index index.php;
    server_name mydomain.com;
    root /home/myusername/sites/mydomain.com/htdocs;
    access_log /home/myusername/sites/mydomain.com/logs/access.log;
    error_log /home/myusername/sites/mydomain.com/logs/error.log;

    location / {
        try_files $uri =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Note that:
my hosts file is configured;
I restart Nginx after each edit;
the access rights, user and group are correct;
the error.log file is empty; the access.log shows all the 404s;
I tried changing the config by adding/removing some lines, still no change;
the site is enabled in sites-enabled with a correct symlink (I tried to edit it and it opened the right file);
I've got a few other sites on the same server that run fine (so the inclusion of sites-available and sites-enabled is OK, and Nginx works fine).
So, the answer was given to me on ServerFault by Alexey Ten; here is a copy of the answer.
Your try_files directive is too restrictive and, I guess, in the wrong place.
Either remove location / completely (it doesn't make much sense), or at least add $uri/ so the index directive will work:
try_files $uri $uri/ =404;
But my guess is that you need to move this try_files into location ~ \.php$; this will make sure the PHP file exists before passing it to PHP-FPM for processing. All other files will be served by nginx with proper use of the index directive.
server {
    listen 80;
    index index.php;
    server_name mydomain.com;
    root /home/myusername/sites/mydomain.com/htdocs;
    access_log /home/myusername/sites/mydomain.com/logs/access.log;
    error_log /home/myusername/sites/mydomain.com/logs/error.log;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
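If you would rather keep the location / block, the first suggestion above amounts to this (a minimal sketch):

location / {
    # also try the directory form so the index directive can pick up index.php
    try_files $uri $uri/ =404;
}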

Nginx forward www and non-www requests to one directory?

I have a MediaTemple server from which I serve many websites. I use nginx and have the following config file. I am correctly forwarding all non-www traffic (i.e., http://example.com) to the appropriate directory. However, all the www traffic is returning 404 because my config file is looking for /directory-structure/www.sitename.com instead of /directory-structure/sitename.com
How can I have both www and non-www requests go to one directory? Thanks.
server {
    listen 80;
    server_name _;
    root /var/www/vhosts/$host/httpdocs/;
    error_page 404 /;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        #fastcgi_pass php;
        fastcgi_pass 127.0.0.1:9000;
    }

    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires max;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }

    # this prevents hidden files (beginning with a period) from being served
    location ~ /\. { access_log off; log_not_found off; deny all; }
}
Starting with version 0.7.40, Nginx accepts regular expressions (with captures) in server_name. Thus it's possible to extract the domain name (without www) and use this variable in the root directive:
server_name ~^(?:www\.)?(.+)$ ;
root /var/www/vhosts/$1/httpdocs;
Starting with 0.8.25 it is possible to use named captures:
server_name ~^(?:www\.)?(?P<domain>.+)$ ;
root /var/www/vhosts/$domain/httpdocs;
Another syntax to define named captures is (?<domain>.+) (PCRE version 7.0 and later).
Try adding the following to the server config above:
if ($host = "www.example.com") {
    rewrite (.*) http://example.com$1;
}
What happens here is that we are instructing nginx to serve the pages as http://example.com even though the browser URL reads http://www.example.com - I hope this works.
UPDATE
Try this for a generic version:
if ($host ~* "www.(.*)") {
    rewrite ^ http://$1$request_uri?;
}
Given the potential issues with if, as linked to in the comments on RakeshS's answer, as well as the fact that RakeshS's answer didn't work for me anyway, here's a solution that should be safer and worked for me with Nginx 1.0.14.
Add an additional server entry for each one of your server sections that does a rewrite:
server {
    server_name www.yourwebsite.com;
    rewrite ^ $scheme://yourwebsite.com$request_uri permanent;
}

nginx rewrite mystery - duplicating hostname and losing https

I am replacing lighttpd with nginx on my development server. I got it working with PHP and SSL, but I'm stumped by what should be a simple rewrite. I need to rewrite URLs from
http[s]://dev.foo.com/signup/123456
to
http[s]://dev.foo.com/signup/index.php?attcode=123456
The rule I am using is:
rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;
I have tried numerous variations on this, moved it around, put it inside a location block. What happens is the URL is rewritten to:
http://dev.foo.com/dev.foo.com/signup/123456
The hostname is inserted, and it seems to always lose https and go to http.
My nginx.conf server section is below. I have read and re-read the nginx docs (such as they are) and searched the nginx mailing list, but nothing I've tried has solved this problem.
Ubuntu 8.04 LTS, in case that matters.
Thanks.
server {
    listen 80;
    listen 443 default ssl;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    index index.php index.html;

    # ssl cert stuff omitted

    charset utf-8;
    access_log /var/log/www/dev.access.log main;

    location ~ /\. {
        deny all;
    }

    location ~* ^.+\.(inc|tpl|sql|ini|bak|sh|cgi)$ {
        deny all;
    }

    location ~* ^/(scripts|tmp|sql)/ {
        deny all;
    }

    rewrite ^/robots.txt$ /robots_nocrawl.txt break;
    rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;

    location / {
        try_files $uri $uri/ /error_404.php;
    }

    location ~ \.php$ {
        fastcgi_pass localhost:51115;
        fastcgi_index index.php;
        fastcgi_intercept_errors on;
        include fastcgi_params;
        fastcgi_param SERVER_NAME $http_host;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    error_page 404 /error_404.php;
}
Don't put HTTP and HTTPS in the same server block. Separate them into two almost-identical server blocks, one for HTTP and one for HTTPS. Otherwise you will confuse all kinds of Nginx internals.
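A minimal sketch of that split, reusing the names and rules from the question (the shared rewrite and location blocks would be duplicated in both blocks, or moved into a file pulled in with include):

server {
    listen 80;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    index index.php index.html;

    rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;
    # ... same location blocks as in the question ...
}

server {
    listen 443 ssl;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    index index.php index.html;
    # ssl_certificate and ssl_certificate_key go here

    rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;
    # ... same location blocks as in the question ...
}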
