Nginx configuration, folder permissions and lets-encrypt - nginx

I am trying to use certbot and letsencrypt on my Ubuntu 16.04 server, so I can install a mail server.
I am running certbot like this:
sudo /opt/letsencrypt/certbot-auto certonly --agree-tos --webroot -w /path/to/www/example -d example.com -d www.example.com
I get the following output from certbot (snippet shown below):
Domain: www.example.com
Type: unauthorized
Detail: Invalid response from
http://www.example.com/.well-known/acme-challenge/QEZwFgUGOJqqXHcLmTmkr5z83dbH3QlrIUk1S3JI_cg:
"<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
This is what my directory structure looks like:
root@yourbox:/path/to/www/example$ ls -la
total 12
drwxr-xr-x 3 example root 4096 Nov 1 10:17 .
drwxr-xr-x 5 root webapps 4096 Nov 1 10:13 ..
drwxr-xr-x 2 root root 4096 Nov 1 10:36 .well-known
root@yourbox:/path/to/www/example$
root@yourbox:/path/to/www/example$ cd .well-known/
root@yourbox:/path/to/www/example/.well-known$ ls -la
total 8
drwxr-xr-x 2 root root 4096 Nov 1 10:36 .
drwxr-xr-x 3 example root 4096 Nov 1 10:17 ..
root@yourbox:/path/to/www/example/.well-known$
From the above, I can see that the challenge file does not exist, presumably because certbot is unable to write to the folder.
However, I first needed to check that nginx was set up correctly, and that it was serving files from folders starting with a period.
This is the configuration file for nginx for the website (/etc/nginx/sites-available/example):
server {
    # Allow access to the letsencrypt ACME Challenge
    location ~ /\.well-known\/acme-challenge {
        allow all;
    }
}
I manually created a test file (sudo touch /path/to/www/example/fake) and gave it the correct permissions:
root@yourbox:/path/to/www/example/.well-known/acme-challenge$ ls -l
total 0
-rw-r--r-- 1 example webapps 0 Nov 1 10:45 fake
I then tried to access http://www.example.com/.well-known/acme-challenge/fake from a browser - and got a 404 error.
This means I have two errors:
Nginx is not correctly setup to serve files from the .well-known/acme-challenge folder
The file permissions in the /path/to/www/example folder are wrong, so certbot can't write its automatically generated files to the .well-known/acme-challenge folder.
How may I fix these issues?

Your Nginx config file has nothing in it that makes your /path/to/www/example/ directory web-accessible.
Here's a simple configuration which will put your site live and allow Let's Encrypt to create a valid certificate. Bear in mind that port 80 will need to be accessible.
server {
    listen 80;
    server_name www.example.co.uk example.co.uk;
    root /path/to/www/example;
    access_log /var/log/nginx/example.co.uk.log;
    error_log /var/log/nginx/example.co.uk.log;
    index index.html index.htm index.php;

    location ~ /\.well-known\/acme-challenge {
        allow all;
    }

    location / {
        try_files $uri $uri/index.html $uri.html =404;
    }
}
Change your server_name accordingly, or use your /etc/hosts file to configure a local domain.
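As a rough sketch of those last steps (the 127.0.0.1 address below is just the usual loopback for local testing; adjust the hostnames to your own domain), a hosts entry plus a config check and reload might look like this:
# /etc/hosts entry, for local testing only
127.0.0.1   example.co.uk www.example.co.uk

# validate the configuration and reload nginx
sudo nginx -t
sudo systemctl reload nginx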

I had the same problem, which was caused by the following block:
location ~ /\. {
    deny all;
}
I added the following ABOVE the block mentioned above:
location ~ /\.well-known\/acme-challenge {
    allow all;
}
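Order matters here because nginx uses the first matching regex location. Putting the two blocks together, a sketch of the resulting server block looks like this (the deny rule for dot-files stays, but the ACME challenge path is matched first):
server {
    # ACME challenge requests match this regex before the dot-file rule below
    location ~ /\.well-known\/acme-challenge {
        allow all;
    }

    # everything else starting with a dot stays blocked
    location ~ /\. {
        deny all;
    }
}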


Nginx: How to set up multiple virtual hosts (server blocks) on different subdomains?

EDIT:
Ok, very strange still. It seems that it does not work on my main browser. In incognito browsers or just a completely new chrome window the sites now do work. I guess it has something to do with browser caching?
So I am hosting my website on Digital Ocean and I want to host multiple 'websites' on 1 droplet/server. By multiple websites, I mean different subdomains of my main domain. So I want to host admin.domain.com, dev.domain.com and test.domain.com on the same server. I followed this tutorial, but it is not working as expected. Currently, only 1 subdomain of the 3 is working...
What have I tried so far?
First off, I created 3 A records in my DNS, all pointing to the same droplet IP (server_ip) on Digital Ocean.
I've created a different folder for each subdomain in the /var/www folder, each containing an html folder with a simple index.html file and some html.
I then used the following command: sudo chmod -R 755 /var/www.
Next, I copied the default server block file and used this as the default for a new server block with the following command:
sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/admin.domain.nl
I changed the contents of all 3 config files for the 3 subdomains to the following (obviously changing the root to the specific subdomain, as well as the server_name):
server {
    listen 80;
    listen [::]:80;
    root /var/www/admin.domain.nl/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name admin.domain.nl;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}
I then used the following command: sudo ln -s /etc/nginx/sites-available/dev.domain.nl /etc/nginx/sites-enabled/ 3 times for the 3 different subdomains to 'enable' the server blocks.
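Spelled out, the three 'enable' commands would look roughly like this (assuming the other two config files are named admin.domain.nl and test.domain.nl; that is my reading of the question, not something shown verbatim):
sudo ln -s /etc/nginx/sites-available/admin.domain.nl /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/dev.domain.nl /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/test.domain.nl /etc/nginx/sites-enabled/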
There were no syntax errors, so I restarted nginx with: sudo systemctl restart nginx.
The problem
Now, for some very odd reason I do not understand, only the admin.domain.nl site is working. The other 2 subdomains simply display: This site can’t be reached.
What am I missing here?
In /etc/nginx/nginx.conf:
http {
    # ... other contents ...

    # at the bottom:
    server {
        listen 80;
        root /var/www/html/cmp/api;
        server_name "cmpapi.localhost";
        index index.html index.php;
        location / {
            try_files $uri $uri/ $uri.html $uri.php =404;
        }
    }

    server {
        listen 80;
        root /var/www/html/cmp/frontend;
        server_name "cmp.localhost";
        index index.html index.php;
        location / {
            try_files $uri $uri/ $uri.html $uri.php =404;
        }
    }
}
Here my project is called cmp, and there are two projects:
one is a React frontend
the other is a Laravel API project
Two folders were created:
/var/www/html/cmp/api
This one is assigned to http://cmpapi.localhost (server_name).
/var/www/html/cmp/frontend
This one is assigned to http://cmp.localhost (server_name).
cmd
sudo find /var/www/html/cmp/frontend/ -type f -exec chmod 644 {} \;
sudo find /var/www/html/cmp/frontend/ -type d -exec chmod 755 {} \;
sudo find /var/www/html/cmp/api/ -type f -exec chmod 644 {} \;
sudo find /var/www/html/cmp/api/ -type d -exec chmod 755 {} \;
systemctl reload nginx
Browser
http://cmpapi.localhost
http://cmp.localhost
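As a quick sanity check after the reload (the curl calls are my suggestion, not part of the original answer), both hosts should answer from their own document root:
curl -I http://cmpapi.localhost
curl -I http://cmp.localhost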

Nginx: adding a new site's subdomain has no effect

I have a VPS on DigitalOcean that works great with five subdomains. But when I decided to add a 6th (RC), it doesn't work. In order not to make mistakes, I did the following:
duplicated an existing (working) folder under /var/www and renamed it to rc
changed the ownership of this folder: sudo chown -R www-data:www-data rc
duplicated a working config in /etc/nginx/sites-available and renamed it to rc
changed server_name and root there accordingly. So, it looks like this:
server {
    listen 80;
    listen [::]:80;
    charset UTF-8;
    server_name rc.myserver.com;
    root /var/www/rc;
    index index.html;

    location ~ /\. {
        deny all;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
created symlink with ln -s /etc/nginx/sites-available/rc /etc/nginx/sites-enabled/rc
restarted nginx: sudo service nginx restart
Now my /etc/nginx/sites-enabled/ folder looks like this:
lrwxrwxrwx 1 root root 31 Jul 21 2019 html -> /etc/nginx/sites-available/html
lrwxrwxrwx 1 root root 31 Jul 19 2019 hunt -> /etc/nginx/sites-available/hunt
lrwxrwxrwx 1 root root 32 Dec 2 16:43 monit -> /etc/nginx/sites-available/monit
lrwxrwxrwx 1 root root 29 Feb 1 13:57 rc -> /etc/nginx/sites-available/rc
lrwxrwxrwx 1 root root 31 Jul 21 2019 rent -> /etc/nginx/sites-available/rent
lrwxrwxrwx 1 root root 32 Jul 20 2019 tools -> /etc/nginx/sites-available/tools
sudo netstat -plutn | grep nginx shows:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 29155/nginx: master
tcp6 0 0 :::80 :::* LISTEN 29155/nginx: master
My nginx.conf has these lines active:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
/var/log/nginx/error.log and /var/log/nginx/access.log didn't show any problems.
But when I try to open rc.myserver.com, I get a "Failed to open the page" Safari message:
Safari can’t open the page “http://rc.myserver.com” because Safari can’t find the server “rc.myserver.com.”
What could the problem be?
Did you point your subdomain to the Droplet's IP address?
The first thing you have to do is point your subdomain to that IP address via your DNS provider (with an A or CNAME record).
I think that's why you are getting the error:
Safari can’t open the page “http://rc.myserver.com” because Safari can’t find the server “rc.myserver.com”.
Point rc.myserver.com to the Droplet IP address :)
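To confirm the record exists and has propagated, a simple lookup is enough (dig is standard on most Linux systems; the hostname is the one from the question):
# should print the Droplet's IPv4 address once the A record is in place
dig +short A rc.myserver.com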

How can I get rid of this damn 403 error?

I've been trying for hours with lots of solutions but cannot get rid of this 403 error when serving a static subdomain with NGINX.
I've tried chmod-ing all the permissions in the path to the static folder and editing the config file over and over.
NGINX serves my reverse-proxied Node app beautifully but blocks all the static subdomains that used to be on the server.
Permissions:
dr-xr-xr-x root root /
drwxr-xr-x root root home
drwx--x--x ca****8sh nginx ca****8sh
lrwxrwxrwx ca****8sh ca****8sh www -> public_html
drwxr-x--- ca****8sh ca****8sh public_html
drwxr-xr-x nginx nginx residenza******.******ano.ch;
config file:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name residenza******.******ano.ch;
    root /home/ca****8sh/www/residenza******.******ano.ch/;

    location / {
        #try_files $uri $uri/ =404;
        index index.html;
        #autoindex on;
        #autoindex_exact_size off;
    }
[continues with SSL setup]
I've also tried tweaking things around like enabling autoindex, but to no avail.
I'm in despair, please help!
Check which user nginx is using in the first line of nginx.conf. It should be either nginx or www-data. Then run this command (replace www-data with nginx if the user is nginx):
chown -R www-data /home/ca****8sh/www/residenza******.******ano.ch/
If you are using SELinux (e.g. on CentOS), run these commands too:
sudo setsebool -P httpd_can_network_connect on
chcon -Rt httpd_sys_content_t /home/ca****8sh/www/residenza******.******ano.ch/   # httpd_sys_content_t is the standard SELinux type for static web content
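To see exactly which component of the path is blocking nginx, a quick sketch of two diagnostic commands (my own suggestion, not from the original answer; index.html is assumed as the file being served, and replace nginx with www-data if that is your worker user):
# show the owner and permissions of every directory along the path
namei -l /home/ca****8sh/www/residenza******.******ano.ch/index.html
# try to list the docroot as the nginx worker user
sudo -u nginx ls /home/ca****8sh/www/residenza******.******ano.ch/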
I handled it by changing the global nginx user to a higher-tier user. This is what was causing the permissions issue.

nginx webdav upload file permissions

I am using nginx with the http webdav module to upload files to and delete files from a server. I am able to upload and delete files successfully, but the issue I am facing is that the files it uploads do not have execute permissions:
-rw-rw-rw- 1 nginx nginx 1583670 Apr 19 17:20 startup.jpg
whereas the folders it creates have all the permissions:
drwxrwxrwx 2 nginx nginx 4096 Apr 19 16:27 s
I tried adding rwx to the nginx config, but I get an error; it works fine with rw, but with x it gives an error:
Apr 19 17:23:20 CDNSTORE nginx[18386]: nginx: [emerg] invalid value "group:rwx" in /etc/nginx/conf.d/webdav.conf:11
The following is my nginx config:
server {
    listen 80;
    server_name localhost;
    root /home/webdav/files;
    client_body_temp_path /home/webdav/tmp;

    location / {
        dav_methods PUT DELETE MKCOL COPY MOVE;
        # dav_ext_methods PROPFIND OPTIONS;
        create_full_put_path on;
        dav_access user:rw group:rwx all:rwx;
        autoindex on;
        client_max_body_size 1G; # file size limit for new files
        auth_basic "closed site";
        auth_basic_user_file /home/webdav/.htpasswd;
    }
}
I want the image files to be accessible from a URL; for that I need to set -rwxr-xr-x permissions.
I don't want to use cron and a shell script to set permissions, because I need to create folders and subfolders on the fly, and the folders need permissions so webdav can delete them. So I am looking for a solution where nginx sets the permissions when uploading the file.

Let's Encrypt unauthorized 403 forbidden

On the server, Nginx is installed.
Let's Encrypt is working well with www.domain.com but is not working with static.domain.com
With PuTTY, when I enter:
sudo letsencrypt certonly -a webroot --webroot-path=/var/www/site/domain -d static.domain.com -d domain.com -d www.domain.com
I have the below issue:
Failed authorization procedure. static.domain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://static.domain.com/.well-known/acme-challenge/c6zngeBwPq42KLXT2ovW-bVPOQ0OHuJ7Fw_FbfL8XfY: "<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>"
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: static.domain.com
Type: unauthorized
Detail: Invalid response from
http://static.domain.com/.well-known/acme-challenge/c6zngeBwPq42KLXT2ovW-bVPOQ0OHuJ7Fw_FbfL8XfY:
"<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
Does somebody know what the issue could be?
I got an identical error message from certbot when I tried to install a certificate for the first time on my website.
Check the cause on the web server
I was using apache2, not nginx. I looked at the logs in /var/log/apache2/error.log for apache2 error messages associated with that 403 Forbidden event on my website and I found:
[Sun Aug 26 14:16:24.239964 2018] [core:error] [pid 12345] (13)Permission denied: [client 12.34.56.78:1234] AH00035: access to /.well-known/acme-challenge/5PShRrf3tR3wmaDw1LOKXhDOt9QwyX3EVZ13JklRJHs denied (filesystem path '/var/lib/letsencrypt/http_challenges') because search permissions are missing on a component of the path
Permissions and access problem
I googled this error message and found out that apache2 can't read the directory mentioned above (e.g. /var/lib/letsencrypt/http_challenges) because of incorrect permissions, such as:
$ sudo ls -la /var/lib/letsencrypt/
total 16
drwxr-x--- 4 root root 4096 Aug 26 14:31 .
drwxr-xr-x 72 root root 4096 Aug 18 00:48 ..
drwxr-x--- 27 root root 4096 Aug 26 14:26 backups
drwxr-xr-x 2 root root 4096 Aug 26 14:27 http_challenges
So, according to the above line with a dot (.) representing the letsencrypt folder, with permissions rwxr-x---, no one except the root user can read its contents. To rectify the permissions, I just did:
Solution
$ sudo chmod o+rx /var/lib/letsencrypt
which changes the above $ ls command output to:
$ ls -la /var/lib/letsencrypt/
total 16
drwxr-xr-x 4 root root 4096 Aug 26 14:31 .
drwxr-xr-x 72 root root 4096 Aug 18 00:48 ..
drwxr-x--- 27 root root 4096 Aug 26 14:26 backups
drwxr-xr-x 2 root root 4096 Aug 26 14:27 http_challenges
Now, the above line with a dot (.) representing the letsencrypt directory shows rwxr-xr-x, so that "other" users (like the www-data user for apache2) can now read and traverse the letsencrypt directory.
Then certbot worked as expected.
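A sketch of how to double-check the fix from the web server's point of view (www-data is the usual apache2 user on Debian/Ubuntu; adjust if yours differs):
# should now list the challenge directory instead of failing with "Permission denied"
sudo -u www-data ls /var/lib/letsencrypt/http_challenges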
In your server block, add:
# for LetsEncrypt
location ~ /.well-known {
    allow all;
}
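After reloading nginx you can confirm the challenge path is actually reachable over HTTP. This is only a sketch: it assumes static.domain.com is served from the same webroot passed to letsencrypt (/var/www/site/domain), and the test file name is made up:
sudo mkdir -p /var/www/site/domain/.well-known/acme-challenge
echo ok | sudo tee /var/www/site/domain/.well-known/acme-challenge/test.txt
# should print "ok" rather than a 403/404 page
curl http://static.domain.com/.well-known/acme-challenge/test.txt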
I guess you have another webroot for your subdomain, and if so you just need to specify that webroot. In your example you have the same webroot for both static.domain.com and domain.com.
from https://certbot.eff.org/docs/using.html
If you’re getting a certificate for many domains at once, the plugin
needs to know where each domain’s files are served from, which could
potentially be a separate directory for each domain. When requesting a
certificate for multiple domains, each domain will use the most
recently specified --webroot-path
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net
I came across a workaround; it is not the solution (it is not automatic), but it worked.
You can prove your domain ownership using the DNS challenge, via certbot:
sudo certbot -d domain.com --manual --preferred-challenges dns certonly
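With the manual DNS challenge, certbot asks you to publish a TXT record under _acme-challenge.<your domain> before it validates. You can check that the record is visible like this (a sketch, using the placeholder domain from the answer):
# should print the validation token certbot asked you to publish
dig +short TXT _acme-challenge.domain.com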
I had to remove the AAAA records for my domain, as certbot was preferring IPv6. My webhost provider's DNS had default AAAA records for www and @ (the root of the domain).
After carefully examining /var/log/letsencrypt/letsencrypt.log (down where it says "addressUsed"), I saw that it was using an IPv6 address. In my case I don't have any website at www. or the root of my domain serviced by an IPv6 address, so I removed the AAAA records and saw immediate relief to my problem. Due to DNS propagation and record TTL, it may take longer for others to see relief.
certbot will try to connect to you using an IPv6 address if it was able to resolve one, even though you're expecting the connection via IPv4, and that was the extent of my problems.
I suggest deleting the log so you have only fresh entries before continuing with the command (sudo rm /var/log/letsencrypt/letsencrypt.log), then find "addressUsed" and verify that it's an IPv4 address and not an IPv6 address. If it's an IPv6 address, either forward that address at the gateway to your host and verify you're listening on IPv6 as well, OR remove the AAAA records in DNS so that letsencrypt will connect to you using an IPv4 address instead.
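A sketch of how to check whether AAAA records exist for the names being validated (dig is standard; substitute your own domain):
# any output here means an AAAA (IPv6) record exists and may be used for validation
dig +short AAAA domain.com
dig +short AAAA www.domain.com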
In case anyone is still facing this issue, you can try the below; it worked for me:
location ^~ /.well-known/acme-challenge/ {
    default_type "text/plain";
    alias /home/nginx/domains/domain.com/public/acme-challenge/;
}
In my case I denied access to security-related files (/.htaccess, /.htpasswd, etc.) via
location ~ /\. {
    deny all;
}
Which I changed to
location ~ /\.ht {
    deny all;
}
I don't want to unnecessarily repeat things, but it seems there are quite a few different situations that can cause a 403 at certificate renewal. For me, it had to do with an nginx config that had changed because of WordPress / URL rewriting. Using Virtualmin, by the way.
There is a link above in the comments that refers to an issue on GitHub. One guy explains brilliantly how location matching in nginx works and gives a solution for the 403. Still, there might be other issues causing this too.
Thus, for me the solution was to include a location match for /.well-known/.
location ^~ /.well-known/ {
    #limit_req [tighter per-ip settings here]; ## kicked this one out
    access_log off;
    log_not_found off;
    #root /var/www/html; ## kicked this one out
    autoindex off;
    index index.html; # "no-such-file.txt", if expected protos don't need it
    try_files $uri $uri/ =404;
}
I am no nginx expert at all, so I would encourage you to read the post and check which parameters are needed for your situation.
