I have an NGINX web server hosting two domains, and it also runs phpMyAdmin. phpMyAdmin works fine; I access it through the following non-HTTPS URL:
public-ip-address/phpMyAdmin
This is how the symbolic link was set up:
sudo ln -s /usr/share/phpmyadmin/ /var/www/html
Is there a way I can point phpMyAdmin to a website's subdirectory? For example, I would like to reach the phpMyAdmin login page at the following URL:
domain1.com/phpMyAdmin/
How can I achieve this? domain1.com has HTTPS enabled, so this would also secure my phpMyAdmin login.
The server block is the same as NGINX's default block. I created a new config file by copying the default to domain1.com in the /etc/nginx/sites-available folder. The only changes are to the server_name and root directives; everything else is default:
server_name domain1.com www.domain1.com;
root /var/www/domain1.com/html;
I am using certbot for Let's Encrypt SSL certificates. My server block config is shared below:
# Server Block Config for domain1.com
server {
    root /var/www/domain1.com/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name domain1.com www.domain1.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        # # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.domain1.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = domain1.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name domain1.com www.domain1.com;
    return 404; # managed by Certbot
}
Contents of /etc/nginx/snippets/fastcgi-php.conf:
# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;
# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi.conf;
Here is a location block that should work for you (at least a similar config works for me):
location ~* ^/phpmyadmin(?<pmauri>/.*)? {
    alias /usr/share/phpmyadmin/;
    index index.php;
    try_files $pmauri $pmauri/ =404;

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$pmauri;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
}
Place it before the default PHP handler location block; otherwise the default handler will take precedence and this configuration won't work!
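For clarity, a minimal sketch of the intended ordering inside the server block (directive bodies elided):

server {
    # ... root, index, server_name, listen/ssl directives ...

    # phpMyAdmin handler: must come first, since nginx uses the
    # first regex location that matches, in file order
    location ~* ^/phpmyadmin(?<pmauri>/.*)? {
        # ... alias, try_files and nested PHP handler from above ...
    }

    # generic PHP handler for the rest of the site
    location ~ \.php$ {
        # ...
    }
}

After editing, test and reload nginx with sudo nginx -t && sudo systemctl reload nginx.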
You can simply add another symlink to the domain1.com root, while keeping everything else the same, just like you did for the default domain:
sudo ln -s /usr/share/phpmyadmin/ /var/www/domain1.com/html
Am I missing something? I came to this thread while looking for a solution to another problem (an Nginx alias breaking due to the try_files $uri alias bug), but since you are already using a symlink to phpMyAdmin for the site you access through the IP address, you can do the same for any domain.
I have set up an NGINX server with multiple server blocks and configured two simple websites to run on the same server. They point to two different root paths, specified in their respective server block files.
I have updated the DNS details for both domains:
Name servers: updated as per the registrar
A records: pointed to the same static IP address
However, when I test with domain2.com in the browser, it redirects to domain1.com. What am I doing wrong? I have already restarted the nginx server using:
sudo systemctl restart nginx
This is what my server block looks like:
# Default server configuration
#
server {
    root /var/www/domain1.com/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name domain1.com www.domain1.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        # # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.domain1.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = domain1.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name domain1.com www.domain1.com;
    return 404; # managed by Certbot
}
This is my second server block file:
# Default server configuration
#
server {
    listen 80;
    listen [::]:80;

    root /var/www/domain2.com/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name domain2.com www.domain2.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        # # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
Your server block configuration files are perfect; do not change anything. Let me explain what is happening. Your server blocks indicate that you installed certbot, probably to deploy Let's Encrypt SSL certificates for domain1.com, whereas the server block for domain2.com does not have any certbot-managed entries yet. This means you are serving domain1.com over SSL but not domain2.com.
When configuring the SSL certificate for domain1.com, you probably selected the option to redirect all traffic to HTTPS. What's happening is that when a browser requests https://domain2.com, nginx has no server block listening on port 443 for that name, so the request falls through to the only SSL-enabled server block, the one for domain1.com.
However, the domain name on the certificate (domain1.com) will not match the requested name (domain2.com), so the browser will, in all likelihood, show an error or warning about an invalid SSL certificate.
Solution: install an SSL certificate for domain2.com just like you did for domain1.com, and everything should work fine.
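For reference, this is the typical certbot invocation with the nginx plugin (the same tool that produced the "# managed by Certbot" lines above); the exact command can vary with how certbot was installed:

sudo certbot --nginx -d domain2.com -d www.domain2.com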
Let me know if it works.
I want to run both WordPress and YOURLS on one domain, which is configured in an NGINX server block (not the default site). Since the two need to handle URLs differently, they need different try_files directives. WordPress sits at the root of the domain (domain.tld), while YOURLS is installed in the /g/ directory. Despite the two location rules, I get 404s on any links generated by YOURLS (e.g. domain.tld/g/linkname; they are all redirects to external URLs), though I can access the admin backend.
As far as I have read, declaring two location rules (one for /g/ and one for /) should suffice to let NGINX handle the direct and /g/ URLs differently. Is there something wrong with my thinking?
The try_files rules are correct and work well in other single-application server blocks (WordPress as well as YOURLS installed in separate server blocks).
The server block config looks like this:
server {
    listen [::]:80;
    listen 80;

    server_name domain.tld www.domain.tld;

    return 301 https://domain.tld$request_uri;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    root /var/www/html/domain.tld;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name domain.tld www.domain.tld;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
    }

    location /g/ {
        try_files $uri $uri/ /yourls-loader.php$is_args$args;
        expires 14d;
        add_header Cache-Control 'public';
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
The problem with the location /g/ try_files directive is that the path to the YOURLS loader isn't correct. Since the URL handler (yourls-loader.php) lives inside the /g directory, the path to it has to include the /g directory:
try_files $uri $uri/ /g/yourls-loader.php$is_args$args;
A location prefix does not mean that fallback paths are resolved relative to that location; try_files targets are still resolved against the root directive given above.
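With that fix applied, the /g/ block from the config above becomes:

location /g/ {
    try_files $uri $uri/ /g/yourls-loader.php$is_args$args;
    expires 14d;
    add_header Cache-Control 'public';
}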
I have tried to set up nginx on my server, but I am unable to create multiple server blocks.
I have created two files, xxx.com and yyy.com, in the sites-available folder and symlinked them into the sites-enabled folder. But both domains point to only one site; e.g. opening xxx.com opens yyy.com.
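For reference, the symlinks were presumably created along these lines (paths assumed from the description):

sudo ln -s /etc/nginx/sites-available/xxx.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/yyy.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx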
For sites xxx.com and yyy.com (replace xxx.com with yyy.com), I have the following config file in sites-available:
server {
    listen 80;

    server_name xxx.com www.xxx.com;

    root /var/www/xxx.com/public_html;
    index index.html index.php index.htm index.nginx-debian.html;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?$args =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_read_timeout 300;
    }
}
For the default site:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    #listen 443 ssl http2 default_server;
    #listen [::]:443 ssl http2 default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    #include snippets/self-signed.conf;
    #include snippets/ssl-params.conf;

    #root /var/www/public_html;

    # Add index.php to the list if you are using PHP
    index index.html index.php index.htm index.nginx-debian.html;

    server_name _;

    #location / {
    #    # First attempt to serve request as file, then
    #    # as directory, then fall back to displaying a 404.
    #    try_files $uri $uri/ /index.php?$args;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php7.0-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php7.0-fpm:
    #    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #    fastcgi_read_timeout 300;
    #}
}
I have also added entries for the respective domains in /etc/hosts, and I have set server_names_hash_bucket_size 64; in nginx.conf.
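For reference, a sketch of what those /etc/hosts entries would look like (203.0.113.10 stands in for the server's static IP):

203.0.113.10  xxx.com www.xxx.com
203.0.113.10  yyy.com www.yyy.com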
What could I be missing?
I am having a bit of an issue at the moment with nginx: no matter what config I make, I get a 404 page.
I am following instructions for setting up nginx for Laravel.
The folder where the project is located is /var/www/smuvajse, and the main index.php is in the public folder (the full path would be /var/www/smuvajse/public).
This is my config file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/smuvajse/public;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name ip_of_the_instac;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php7.0-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php7.0-fpm:
    #    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
Change try_files $uri $uri/ =404; to try_files $uri $uri/ /index.php?q=$uri&$args;, then reload nginx; that should solve the issue.
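Applied to the config above, the location / block becomes:

location / {
    try_files $uri $uri/ /index.php?q=$uri&$args;
}

Note that nginx will also refuse to start with two location / blocks in one server, so keep only this one. Then test and reload with sudo nginx -t && sudo systemctl reload nginx.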
I have WordPress working well at mysite.com, but YOURLS, which is installed at mysite.com/u, is not working: when I click on any shortened link I get a (WordPress) 404 error.
However, I can get YOURLS to work by adding this to nginx.conf:
location /u { try_files $uri $uri/ /u/yourls-loader.php; }
But then WordPress links break.
Here is my default nginx.conf
I know the fix is to add try_files $uri $uri/ /u/yourls-loader.php; somewhere in nginx.conf, but where do I put it without breaking WordPress?
=================== Update 1 =========================
I got this partially working with the same config, but I noticed that WordPress links that start with "u" don't work, e.g. http://example.com/understand-math redirects to an Error 403 - Forbidden page instead.
================ Update 2 ============
OK, I fixed it by just adding another slash: location /u/ instead of location /u.
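So the working block is simply:

location /u/ {
    try_files $uri $uri/ /u/yourls-loader.php;
}

With the trailing slash, the prefix no longer matches WordPress URLs like /understand-math, which fixes the 403s.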
YOURLS NGINX CONFIGURATION
server {
    # Listen IPv4 & v6
    listen 80;
    listen [::]:80;

    # Optional SSL stuff
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate example.com.crt;
    ssl_certificate_key example.com.key;

    # Server names
    server_name example.com www.example.com;

    # Root directory (NEEDS CONFIGURATION)
    root /path/to/files;

    # Rewrites
    location / {
        # Try files, then folders, then yourls-loader.php
        # --- The most important line ---
        try_files $uri $uri/ /yourls-loader.php;

        # PHP engine
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock; # Can be different
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
GITHUB DOCUMENTATION