Nginx serving static files always falls back to error

I have this configuration:
server {
    server_name frontend-ui.something.com;
    root /var/www/frontend_ui;
    index index.html;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
}
It is located in /etc/nginx/sites-available and symlinked in /etc/nginx/sites-enabled. Whenever I try to access the site, it falls back to the nginx 404 page. Just to be sure that the configuration and nginx are actually being reached, I changed the config to fall back to 500 instead, and it did. So my conclusion is that the config is reached, but the static files are not served/read.
The static files:
root@my-server:/var/www# ls -l frontend-ui/
total 8
drwxrwxr-x 2 bamboo bamboo 4096 Feb 28 11:59 build
-rw-rw-r-- 1 bamboo bamboo 392 Feb 28 11:59 index.html
The funny thing is that I have one more site being served with exactly the same configuration and permissions, just with a different subdomain name, and it works fine.
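One detail stands out in the listings: the root directive points at /var/www/frontend_ui (underscore), while the directory on disk is named frontend-ui (hyphen). A sketch of the directive, assuming the on-disk name is the correct one:

root /var/www/frontend-ui;

With a mismatched root, try_files finds neither $uri nor $uri/ and falls through to =404; the /404.html error page is missing too, so nginx shows its built-in 404 page, which matches the behavior described.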

Related

Nginx adding new site's subdomain takes no effect

I have a VPS on DigitalOcean that works great with five subdomains. But when I decided to add a 6th (RC), it doesn't work. In order not to make mistakes, I did the following:
duplicated an existing (working) /var/www folder and renamed it rc
changed ownership of this folder: sudo chown -R www-data:www-data rc
duplicated a working config in /etc/nginx/sites-available and renamed it rc
changed server_name and root to point there. So it looks like this:
server {
    listen 80;
    listen [::]:80;
    charset UTF-8;
    server_name rc.myserver.com;
    root /var/www/rc;
    index index.html;
    location ~ /\. {
        deny all;
    }
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
created a symlink with ln -s /etc/nginx/sites-available/rc /etc/nginx/sites-enabled/rc
restarted nginx: sudo service nginx restart
Now my /etc/nginx/sites-enabled/ folder looks like this:
lrwxrwxrwx 1 root root 31 Jul 21 2019 html -> /etc/nginx/sites-available/html
lrwxrwxrwx 1 root root 31 Jul 19 2019 hunt -> /etc/nginx/sites-available/hunt
lrwxrwxrwx 1 root root 32 Dec 2 16:43 monit -> /etc/nginx/sites-available/monit
lrwxrwxrwx 1 root root 29 Feb 1 13:57 rc -> /etc/nginx/sites-available/rc
lrwxrwxrwx 1 root root 31 Jul 21 2019 rent -> /etc/nginx/sites-available/rent
lrwxrwxrwx 1 root root 32 Jul 20 2019 tools -> /etc/nginx/sites-available/tools
sudo netstat -plutn | grep nginx shows:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 29155/nginx: master
tcp6 0 0 :::80 :::* LISTEN 29155/nginx: master
My nginx.conf has these lines active:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
/var/log/nginx/error.log and /var/log/nginx/access.log didn't show any problems.
But when I try to open rc.myserver.com, I get a "Failed to open the page" Safari message:
Safari can’t open the page “http://rc.myserver.com” because Safari can’t find the server “rc.myserver.com.”
What can the problem be?
Did you point your subdomain to the Droplet's IP address?
The first thing you have to do is point your subdomains to the single IP address via your DNS provider (A or CNAME records).
I think that's why you are getting the error:
Safari can’t open the page “http://rc.myserver.com” because Safari can’t find the server “rc.myserver.com”.
Point rc.myserver.com to the Droplet's IP address :)
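You can verify the DNS record from a shell before touching nginx again; a quick sketch, with 203.0.113.10 standing in for the Droplet's real address:

$ dig +short rc.myserver.com
203.0.113.10

If the command prints nothing (or the wrong address), the problem is DNS rather than the nginx configuration.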

nginx webdav upload file permissions

I am using nginx with the HTTP WebDAV module to upload and delete files on a server. I am able to upload and delete files successfully, but the issue I am facing is that the files it uploads do not have execute permissions:
-rw-rw-rw- 1 nginx nginx 1583670 Apr 19 17:20 startup.jpg
whereas the folders it creates have all permissions:
drwxrwxrwx 2 nginx nginx 4096 Apr 19 16:27 s
I tried adding rwx to the nginx config, but I get an error. It works fine with rw, but with x it gives the following:
Apr 19 17:23:20 CDNSTORE nginx[18386]: nginx: [emerg] invalid value "group:rwx" in /etc/nginx/conf.d/webdav.conf:11
Following is my nginx config:
server {
    listen 80;
    server_name localhost;
    root /home/webdav/files;
    client_body_temp_path /home/webdav/tmp;
    location / {
        dav_methods PUT DELETE MKCOL COPY MOVE;
        # dav_ext_methods PROPFIND OPTIONS;
        create_full_put_path on;
        dav_access user:rw group:rwx all:rwx;
        autoindex on;
        client_max_body_size 1G; # File size limit for new files
        auth_basic "closed site";
        auth_basic_user_file /home/webdav/.htpasswd;
    }
}
I want the image files to be accessible from a URL, and for that I need to set -rwxr-xr-x permissions.
I don't want to use cron and a shell script to set permissions, because I need to create folders and subfolders on the fly, and the folders need permissions so WebDAV can delete them. So I am looking for a solution where nginx sets the permissions when uploading the file.
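The [emerg] above is expected: dav_access only accepts the r and w permission bits, which is why group:rwx is rejected as an invalid value. A sketch of a valid directive:

dav_access user:rw group:rw all:r;

Note that serving a file over HTTP only requires read permission on the file; the execute bit matters only for directories, and the directories nginx creates already get it, as the drwxrwxrwx listing above shows.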

Serve static content through subdomain in nginx

I have some Slate docs as a website and would like to serve them on the internal server through a subdomain, as follows: internal-docs.mysite.com. For the record, accessing mysite.com shows the "nginx is running properly" page.
I've created a config file with following path and name: /etc/nginx/sites-available/internal-docs.mysite.com:
server {
    listen 80;
    server_name internal-docs.mysite.com;
    root /var/www/docs-internal;
    index index.html;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
}
And of course, I've put the files in /var/www/docs-internal. I then made a symlink in the /etc/nginx/sites-enabled dir pointing to the config file shown above:
internal-docs.mysite.com -> ../sites-available/internal-docs.mysite.com
Then I reloaded nginx (nginx -s reload), but I get a "this site can't be reached" error when accessing the URL.
The setup and configuration look correct to me (according to the guidelines I've followed), which is why I'm at a dead end, sort of...
It seems you forgot the listen directive. Try the following:
server {
    listen 80;
    server_name internal-docs.mysite.com;
    root /var/www/docs-internal;
    index index.html;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
}
If that does not work, check:
That the Nginx user has read permission on the site content. For example, if your Nginx user is www and you have root access, do the following:
# su www
$ cat /var/www/docs-internal/index.html
If that fails, ensure the location has correct ownership and permissions. Note that for a user to be able to browse a directory, that directory must have the execute bit set for that user or user group.
That the Nginx user has read permission on the file ../sites-available/internal-docs.mysite.com. For example, if your Nginx user is www and you have root access, do the following:
# su www
$ cat /etc/nginx/sites-available/internal-docs.mysite.com
If that fails, ensure that the config files have correct ownership. Note: normally the Nginx master process runs as root, and that process spawns worker processes running as the Nginx user, so permissions on config files are unlikely to be the problem.
That your config file name maybe should end with ".conf" (on my server I have the following line: include conf.d/*.conf; so it will NOT load any config file ending with ".com").
That Nginx actually includes files from ../sites-enabled/ in its main config file. Maybe it does not, and looks instead in the conf.d directory (the default); see the sketch after this list.
That you can do a ping and nslookup on the subdomain. If you cannot, then you have to fix that first (DNS, firewall...).
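A quick way to check the two config-inclusion points above: nginx -T dumps the fully merged configuration that nginx actually loads, so you can grep it for the expected server block. A sketch:

$ sudo nginx -T | grep -n "internal-docs.mysite.com"

If nothing comes back, the file is not being included at all (wrong include pattern, missing symlink, or a filename the include glob does not match).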
For the sake of others: the configuration I wrote was correct, and my problem was in two things:
I had to remove the listen 80 directive, since another configuration file already specifies that nginx should listen on port 80. One should not tell nginx twice to listen on the same port, even if it's in two separate configuration files.
Permissions on the /var/www/docs-internal folder. Opening a folder requires the x (execute) permission, while opening a file requires the r (read) permission. I had to set the appropriate permissions on all the folders in this hierarchy so that the content could be opened globally (by everyone), which is basically what accessing it from the browser amounts to; a sketch follows below.
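A minimal sketch of that permissions fix, assuming the directory hierarchy from the question. Each directory on the path needs the execute (traverse) bit, while the files themselves only need to be readable:

# directories need x so they can be traversed
sudo chmod o+x /var /var/www /var/www/docs-internal
# files need r so they can be read
sudo chmod -R o+r /var/www/docs-internal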

Nginx configuration, folder permissions and lets-encrypt

I am trying to use certbot and Let's Encrypt on my Ubuntu 16.04 server, so I can install a mail server.
I am running certbot like this:
sudo /opt/letsencrypt/certbot-auto certonly --agree-tos --webroot -w /path/to/www/example -d example.com -d www.example.com
I get the following output from certbot (snippet shown below):
Domain: www.example.com
Type: unauthorized
Detail: Invalid response from
http://www.example.com/.well-known/acme-challenge/QEZwFgUGOJqqXHcLmTmkr5z83dbH3QlrIUk1S3JI_cg:
"<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
This is what my directory structure looks like:
root@yourbox:/path/to/www/example$ ls -la
total 12
drwxr-xr-x 3 example root 4096 Nov 1 10:17 .
drwxr-xr-x 5 root webapps 4096 Nov 1 10:13 ..
drwxr-xr-x 2 root root 4096 Nov 1 10:36 .well-known
root@yourbox:/path/to/www/example$
root@yourbox:/path/to/www/example$ cd .well-known/
root@yourbox:/path/to/www/example/.well-known$ ls -la
total 8
drwxr-xr-x 2 root root 4096 Nov 1 10:36 .
drwxr-xr-x 3 example root 4096 Nov 1 10:17 ..
root@yourbox:/path/to/www/example/.well-known$
From the above I can see that the challenge file does not exist, presumably because certbot is unable to write to the folder.
However, I first needed to check that nginx was set up correctly and was actually serving files from folders starting with a period.
This is the configuration file for nginx for the website (/etc/nginx/sites-available/example):
server {
    # Allow access to the letsencrypt ACME Challenge
    location ~ /\.well-known\/acme-challenge {
        allow all;
    }
}
I manually created a test file (sudo touch /path/to/www/example/.well-known/acme-challenge/fake) and gave it the correct permissions:
root@yourbox:/path/to/www/example/.well-known/acme-challenge$ ls -l
total 0
-rw-r--r-- 1 example webapps 0 Nov 1 10:45 fake
I then tried to access http://www.example.com/.well-known/acme-challenge/fake from a browser, and got a 404 error.
This means I have two errors:
Nginx is not correctly set up to serve files from the .well-known/acme-challenge folder.
The file permissions in the /path/to/www/example folder are wrong, so certbot can't write its automatically generated files to the .well-known/acme-challenge folder.
How may I fix these issues?
Your nginx config file has no configuration to make your /path/to/www/example/ directory web accessible.
Here's a simple configuration which will put your site live and allow Let's Encrypt to create a valid certificate. Bear in mind that port 80 will need to be accessible.
server {
    listen 80;
    server_name www.example.co.uk example.co.uk;
    root /path/to/www/example;
    access_log /var/log/nginx/example.co.uk.log;
    error_log /var/log/nginx/example.co.uk.log;
    index index.html index.htm index.php;
    location ~ /\.well-known\/acme-challenge {
        allow all;
    }
    location / {
        try_files $uri $uri/index.html $uri.html =404;
    }
}
Change your server_name accordingly, or use your /etc/hosts file to configure a local domain.
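If you take the /etc/hosts route for local testing, a minimal sketch (192.0.2.10 is a stand-in for the server's real address):

# /etc/hosts
192.0.2.10   example.co.uk www.example.co.uk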
I had the same problem, which was caused by the following block:
location ~ /\. {
    deny all;
}
I added the following ABOVE the block shown above:
location ~ /\.well-known\/acme-challenge {
    allow all;
}
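Put together, the relevant part of the server block looks like this. The order matters: nginx checks regex locations in their order of appearance and stops at the first match, so the allow block must come before the catch-all deny:

location ~ /\.well-known\/acme-challenge {
    allow all;
}
# matched only after the acme-challenge location, so challenge files stay reachable
location ~ /\. {
    deny all;
}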

500 Server error with Plesk, NGINX, PHP-FPM when removing URL trailing slash

I'm running into an extremely bizarre error while running a WordPress site.
WordPress has permalinks turned on. The 500 server error occurs when you REMOVE the trailing slash (/) from the URL. For example:
www.site.com/about/ -> works fine.
www.site.com/about -> throws a 500 server error.
The error log show the following:
[Tue Sep 24 00:44:58 2013] [warn] [client 75.52.190.1] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
[Tue Sep 24 00:44:58 2013] [error] [client 75.52.190.1] Premature end of script headers: index.php
The WordPress debug log is active, but no errors or warnings are being generated.
Other points to note:
The server has multiple domains managed under Plesk 11.5.
Only one of the domains suffers from this issue.
I compared the vhost.conf config files located in /var/www/system/domain/etc/ against another WordPress domain that is not having this issue. Everything is identical.
I also tried removing all of the WordPress files and uploading a completely fresh copy. The problem still occurs, even with a fresh copy of WordPress and no plugins, templates, or anything else.
One last item that I noticed: my domain-specific vhost.conf has the following info:
location ~ /$ {
    index index.php index.cgi index.pl index.html index.xhtml index.htm index.shtml;
    try_files $uri $uri/ /index.php?$args;
}
That seems to match only requests ending with a /. Should I remove the / or add a similar block? The only reason I haven't tried is that none of the other domains suffer from this issue. My next course of action would be to download all the domain conf files and diff them against the domain with the error. I'd rather not go down that path if possible.
Thanks!
You need to remove the $ from the location block, because this location only matches URLs that end with a /. Since you won't need regex matching then, you can remove the ~ too, so the final result is:
location / {
    # your rewrites and try_files
}
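Applied to the vhost above, a sketch of the resulting block, keeping the index and try_files lines from the question:

location / {
    index index.php index.cgi index.pl index.html index.xhtml index.htm index.shtml;
    try_files $uri $uri/ /index.php?$args;
}

With this, /about is first tried as a file, then as the directory /about/, and finally falls back to /index.php?$args, which lets WordPress resolve the permalink.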
Strange: without
location ~ /$ {
    try_files $uri /wordpress/index.php?$args;
}
I got 404 errors for permalinks, and with it everything works. Maybe it will help someone.
The final, working code for me is as follows:
location ~ / {
    index index.php index.cgi index.pl index.html index.xhtml index.htm index.shtml;
    try_files $uri $uri/ /index.php?$args;
}
