I am using an Ubuntu 18.04.5 LTS web server that is configured with nginx. The server is running and I can access any file on my main page, example.com. Now I want to install Matomo on the server. However, I can only manage to access the installation via my root URL www.example.com. Whenever I try to move the Matomo access to a subpage, e.g. example.com/matomo/, my server returns a 404 instead. I think I have made a mistake in the configuration for calling subpages, but I cannot figure out what went wrong. I am new to nginx and have spent the last two days testing for a solution. Any help would be highly appreciated. Please find my server.conf as well as my matomo.conf below.
My server config file is as follows:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.example.com;
    set $base /var/www/example.com;
    root $base/public;
    # SSL
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    # security
    include nginxconfig.io/security.conf;
    # logging
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log warn;
    # index.php
    index index.php;
    # index.php fallback
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    # handle .php
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        include nginxconfig.io/php_fastcgi.conf;
    }
}
My matomo.conf:
server {
    listen 443 ssl http2;
    server_name www.example.com/subpage example.com/subpage;
    access_log /var/log/nginx/matomo.access.log;
    error_log /var/log/nginx/matomo.error.log;
    ## SSL
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    add_header Referrer-Policy origin always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    root /var/www/example.com/subpage/matomo/; # path to matomo instance
    index index.php;
    ## only allow accessing the following php files
    location ~ ^/(index|matomo|piwik|js/index|plugins/HeatmapSessionRecording/configs)\.php {
        include snippets/fastcgi-php.conf;
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
    ## deny access to all other .php files
    location ~* ^.+\.php$ {
        deny all;
        return 403;
    }
    ## serve all other files normally
    location / {
        #try_files $uri $uri/ =404;
    }
    ## disable all access to the following directories
    location ~ ^/(config|tmp|core|lang) {
        deny all;
        return 403; # replace with 404 to not show these directories exist
    }
    location ~ /\.ht {
        deny all;
        return 403;
    }
    location ~ js/container_.*_preview\.js$ {
        expires off;
        add_header Cache-Control 'private, no-cache, no-store';
    }
    location ~ \.(gif|ico|jpg|png|svg|js|css|htm|html|mp3|mp4|wav|ogg|avi|ttf|eot|woff|woff2|json)$ {
        allow all;
        ## Cache images, CSS, JS and webfonts for an hour
        ## Increasing the duration may improve the load time, but may cause old files to show after a Matomo upgrade
        expires 1h;
        add_header Pragma public;
        add_header Cache-Control "public";
    }
    location ~ ^/(libs|vendor|plugins|misc/user|node_modules) {
        deny all;
        return 403;
    }
    ## properly display textfiles in root directory
    location ~ /(.*\.md|LEGALNOTICE|LICENSE) {
        default_type text/plain;
    }
}
# vim: filetype=nginx
Thanks for the quick reply.
After looking further for solutions I have now done both: I created a subdomain and launched Matomo under a subpage by adding the following code to the subdomain's config.
location /matomo {
    root /var/www/subdomain.example.com/matomo;
}
I found this to be a great tutorial on creating subdomains.
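For anyone who still wants Matomo under a sub-path of the main vhost instead of a subdomain, something along the following lines might work (an untested sketch: the Matomo directory /var/www/example.com/matomo and the alias-based layout are assumptions, only the PHP-FPM 7.2 socket is taken from the configs above). Both locations would go into the existing server block in server.conf, with the regex location placed above the generic \.php$ handler so it wins for PHP files under /matomo:
# PHP files under /matomo: build SCRIPT_FILENAME explicitly, since alias
# does not combine cleanly with $document_root$fastcgi_script_name
location ~ ^/matomo/(.+\.php)$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/example.com/matomo/$1;
    fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
}
# everything else under /matomo (CSS, JS, images) is served straight from disk
location /matomo {
    alias /var/www/example.com/matomo;
    index index.php;
}
The access restrictions from the matomo.conf above (denying config/, tmp/, core/, lang/ and so on) are not replicated here and would still need their own location blocks.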
Related
I'm having some trouble getting a second nginx server block live: I can't get the domain to point to the correct root folder, and the Let's Encrypt ACME challenge is failing (probably related problems).
The server is Ubuntu 18.04 and I'm using it as a sandbox to work on sites.
Here is the sites-available conf for the site that just redirects to the nginx default page:
server {
    root /var/www/boothslop.online;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name boothslop.online www.boothslop.online;
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
    location / {
        #try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
    location ~ /\.ht {
        deny all;
    }
}
Here is the sites-available conf for the site that is working correctly, both for Let's Encrypt and for finding the correct root folder when the domain is accessed:
server {
    root /var/www/webtest.tech;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name webtest.tech www.webtest.tech;
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
    location / {
        #try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
    location ~ /\.ht {
        deny all;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/webtest.tech/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/webtest.tech/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.webtest.tech) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = webtest.tech) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name webtest.tech www.webtest.tech;
}
This is the error I get from the ACME challenge:
Domain: www.boothslop.online
Type: unauthorized
Detail: Invalid response from
http://www.boothslop.online/.well-known/acme-challenge/G13Ou7X8U-KMQVvT_ExNvAfK5cF-jHkobGp7hyqw8ac
[192.34.60.43]: "<html>\r\n<head><title>404 Not
Found</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>404
Not Found</h1></center>\r\n<hr><center>"
Thanks very much in advance!
The error 404 Not Found means it couldn't find the following file:
http://www.boothslop.online/.well-known/acme-challenge/G13Ou7X8U-KMQVvT_ExNvAfK5cF-jHkobGp7hyqw8ac
Have you added that file inside your public directory?
When responding to ACME challenges, you're asked to add a text file at that address, along with what its content should be, so that the verification can run.
Run the command again and it will ask you to save a file with a specific file name and contents. Create the folders in the following structure:
<public directory>
  - .well-known
    - acme-challenge
Add the text file inside the acme-challenge folder, then proceed with the verification.
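If nginx is not already serving that directory, a location block along these lines (a sketch, untested; the webroot path is taken from the boothslop config above) inside the server block that answers for www.boothslop.online on port 80 makes the challenge files reachable:
# serve Let's Encrypt challenge files straight from the webroot
location ^~ /.well-known/acme-challenge/ {
    root /var/www/boothslop.online;
    default_type text/plain;
}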
Hope that helps!
I installed a new WordPress blog through Forge onto the same server as a Laravel 5.4 app. I put the blog in blog.example.com for simplicity's sake, but I don't have any DNS actually pointing to that subdomain. Instead, I want to have example.com/blog pointing to my WordPress installation.
I then modified the nginx conf file for the Laravel site to look like this:
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/example.com/before/*;
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    root /home/forge/example.com/current/public;
    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/example.com/230815/server.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/230815/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'SHA-HASH-HERE';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    index index.html index.htm index.php;
    charset utf-8;
    # FORGE CONFIG (DOT NOT REMOVE!)
    include forge-conf/example.com/server/*;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    access_log off;
    error_log /var/log/nginx/example.com-error.log error;
    error_page 404 /index.php;
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_read_timeout 600;
        fastcgi_send_timeout 600;
        fastcgi_connect_timeout 600;
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
    location /blog {
        root /home/forge/blog.example.com/public;
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php?q=$uri&$args;
        access_log /var/log/nginx/blog.example.com-access.log;
        error_log /var/log/nginx/blog.example.com-error.log error;
    }
}
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/example.com/after/*;
I restarted nginx expecting to see the WP installation when I visit example.com/blog but instead I only see a 404 error from the Laravel app.
What is wrong with my approach here?
Assuming your index file is actually .../blog.example.com/public/index.*, then inside your location /blog block I believe you will want to change root to alias. Try that and see if it helps; the documents below go into more depth. If you do that, you may also want to get rid of the try_files for that location. I would also browse the link at the bottom from NGINX and check your config against those suggestions.
NGINX Config
NGINX Alias Docs
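As a rough illustration (untested; the path is taken from the question), the /blog location could look like this with alias instead of root:
location /blog {
    # with alias, /blog/foo.css maps to /home/forge/blog.example.com/public/foo.css
    alias /home/forge/blog.example.com/public;
    index index.php index.html index.htm;
}
Note that the existing location ~ \.php$ block still builds paths against the Laravel root, so PHP requests under /blog may need their own handling (for example, a separate PHP location that uses $request_filename for SCRIPT_FILENAME).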
I am trying to redirect all http and https calls to https://www.example.com using the nginx config file.
The problem is that the redirect does not work for http://example.com -> https://www.example.com.
All others work.
server {
    listen 80;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 443;
    ssl on;
    ssl_certificate /root/www.example.com.crt;
    ssl_certificate_key /root/example.com.key;
    server_name www.example.com;
    add_header Strict-Transport-Security "max-age=31536000";
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
    root /var/www/example.com/htdocs;
    index index.php index.html;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ .php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
    }
    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control "public";
    }
}
You may try these; I guess it's about using the exact statements:
https://www.rosehosting.com/blog/how-to-redirect-http-traffic-to-https-in-nginx-and-apache/
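For reference, the usual pattern is one plain-HTTP server block that catches both hostnames and a separate HTTPS block for the bare domain, roughly like this (an untested sketch; the certificate paths are copied from the question and it assumes the certificate also covers the bare example.com):
# http://example.com and http://www.example.com
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}
# https://example.com
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /root/www.example.com.crt;
    ssl_certificate_key /root/example.com.key;
    return 301 https://www.example.com$request_uri;
}
The existing server block for www.example.com on 443 then remains the only one that actually serves content.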
Is it possible to optimize/minimize the config posted below?
I feel that it should be possible to merge all the redirects into something simpler:
http:// & http://www & https://www > https://
Though I've had issues and settled for the setup below.
I understand variables are not supported in NGINX config, so I have to manually define the log locations for example. Would there be a way to set a default location for all vhosts?
I use the same ssl-params.conf file for all vhosts. Can this be defaulted and disabled on a per-vhost basis?
# Redirect http:// & http://www to https://
server {
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}
# Redirect https://www to https://
server {
    listen 443 ssl;
    server_name www.example.com;
    return 301 https://example.com/$request_uri;
}
# Main config
server {
    listen 443 ssl;
    server_name example.com;
    # SSL config
    include snippets/ssl-example.com.conf;
    include snippets/ssl-params.conf;
    # Error logs
    access_log /srv/logs/nginx.access.example.com.log;
    error_log /srv/logs/nginx.error.example.com.log;
    # Root dir
    location / {
        root /srv/example.com/_site/;
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php?$args;
    }
    # Caching
    location ~ .php$ {
        root /srv/example.com/_site/;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        root /srv/example.com/_site/;
        expires 365d;
    }
    location ~* \.(pdf)$ {
        root /srv/example.com/_site/;
        expires 30d;
    }
    # SSL
    location /.well-known {
        allow all;
    }
}
I understand variables are not supported in NGINX config, so I have to manually define the log locations for example. Would there be a way to set a default location for all vhosts?
Yes, just define it in the http context of your config or stick with the default of your distro (e.g. /var/log/nginx/access.log).
I use the same ssl-params.conf file for all vhosts. Can this be defaulted and disabled on a per-vhost basis?
It works the other way around: you enable it where you need it through the include directive.
Here is a shorter config (untested):
http {
    error_log /srv/logs/nginx.error.example.com.log;
    access_log /srv/logs/nginx.access.example.com.log;
    index index.php index.html index.htm;
    server {
        listen 80;
        listen 443 ssl;
        server_name .example.com;
        include snippets/ssl-example.com.conf;
        include snippets/ssl-params.conf;
        return 301 https://example.com$request_uri;
    }
    server {
        listen 443 ssl;
        server_name example.com;
        root /srv/example.com/_site/;
        include snippets/ssl-example.com.conf;
        include snippets/ssl-params.conf;
        location / {
            location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                try_files $uri =404;
            }
            location ~* \.(jpe?g|png|gif|ico|css|js)$ {
                expires 365d;
            }
            location ~* \.(pdf)$ {
                expires 30d;
            }
            try_files $uri $uri/ /index.php?$args;
        }
        location /.well-known {
            allow all;
        }
    }
}
I've just switched from Apache to nginx and it still takes some getting used to (and a lot of learning).
I'm running a Pagekit website which has this configuration: https://gist.github.com/DarrylDias/be8955970f4b37fdd682
server {
    listen 80;
    listen [::]:80;
    # SSL configuration
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl on;
    ssl_certificate /etc/ssl/private/mydomain.com.crt;
    ssl_certificate_key /etc/ssl/private/mydomain.com.private.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_client_certificate /etc/ssl/private/cloudflare.origin-pull-ca.pem;
    ssl_verify_client on;
    server_name mydomain.com www.mydomain.com;
    root /home/vhosts/domains/mydomain.com/public/;
    index index.php;
    # Leverage browser caching of media files for 30 days
    location ~* \.(?:ico|css|js|gif|jpe?g|png|ttf|woff)\$ {
        access_log off;
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control "public, mustrevalidate, proxy-revalidate";
    }
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    # Deny access to sensitive folders
    location ~* /(app|packages|storage|tmp)/.*$ {
        return 403;
    }
    # Deny access to files with the following extensions
    location ~* \.(db|json|lock|dist|md)$ {
        return 403;
    }
    # Deny access to following files
    location ~ /(config.php|pagekit|composer.lock|composer.json|LICENSE|\.htaccess) {
        return 403;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php7-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_param HTTP_MOD_REWRITE On;
    }
}
Unfortunately, many people (including me) have the issue that files with extensions such as js, css, jpg, etc. get a 403 response because they're located inside either the app or the packages directory.
I've attempted multiple regexes to try and give the location for these files a higher priority in nginx, but they seemed to have no effect.
How should this config file be changed in order to allow these kinds of files, but still return a 403 for all other files inside those directories?
EDIT: the file URLs look like https://example.com/app/js/something.min.js?v=1921. Perhaps it doesn't work because of the ?v=1921?
According to nginx's documentation:
nginx checks locations given by regular expression in the order listed in the configuration file
So first you need to move your last location to the top.
Then the regular expression that tries to match static files is also incorrect. The dollar sign "$" should match the end of the path, but it was escaped by a preceding backslash "\" (so it actually matches a literal "$" character). Removing the backslash will fix your issue:
location ~* \.(?:ico|css|js|gif|jpe?g|png|ttf|woff)$ {
    ...
}
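Putting the two points together, the relevant locations from the question could be ordered like this (an untested sketch). The ?v=1921 part is a query string, which is never part of the URI that location regexes are matched against, so it does not interfere here:
# static assets: listed before the deny rules so this regex matches first
location ~* \.(?:ico|css|js|gif|jpe?g|png|ttf|woff)$ {
    access_log off;
    expires 30d;
}
# everything else inside these folders is still denied
location ~* /(app|packages|storage|tmp)/.*$ {
    return 403;
}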