Nginx is installed on the server.
Let's Encrypt works fine for www.domain.com but not for static.domain.com.
In PuTTY, when I enter:
sudo letsencrypt certonly -a webroot --webroot-path=/var/www/site/domain -d static.domain.com -d domain.com -d www.domain.com
I get the following error:
Failed authorization procedure. static.domain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://static.domain.com/.well-known/acme-challenge/c6zngeBwPq42KLXT2ovW-bVPOQ0OHuJ7Fw_FbfL8XfY: "<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>"
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: static.domain.com
Type: unauthorized
Detail: Invalid response from
http://static.domain.com/.well-known/acme-challenge/c6zngeBwPq42KLXT2ovW-bVPOQ0OHuJ7Fw_FbfL8XfY:
"<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
Does anybody know what the issue could be?
I got an identical error message from certbot when I tried to install a certificate for the first time on my website.
Check the cause on the web server
I was using apache2, not nginx. I looked in /var/log/apache2/error.log for apache2 error messages associated with that 403 Forbidden event on my website and found:
[Sun Aug 26 14:16:24.239964 2018] [core:error] [pid 12345] (13)Permission denied: [client 12.34.56.78:1234] AH00035: access to /.well-known/acme-challenge/5PShRrf3tR3wmaDw1LOKXhDOt9QwyX3EVZ13JklRJHs denied (filesystem path '/var/lib/letsencrypt/http_challenges') because search permissions are missing on a component of the path
Permissions and access problem
I googled this error message and found out that apache2 can't read the directory mentioned above (i.e. /var/lib/letsencrypt/http_challenges) because of incorrect permissions, such as:
$ sudo ls -la /var/lib/letsencrypt/
total 16
drwxr-x--- 4 root root 4096 Aug 26 14:31 .
drwxr-xr-x 72 root root 4096 Aug 18 00:48 ..
drwxr-x--- 27 root root 4096 Aug 26 14:26 backups
drwxr-xr-x 2 root root 4096 Aug 26 14:27 http_challenges
So, according to the line above whose name is a dot (.), which represents the letsencrypt folder itself, the permissions are rwxr-x---: no user other than root (and the root group) can read its contents.
Solution
To fix the permissions, I simply ran:
$ sudo chmod o+rx /var/lib/letsencrypt
which changes the above $ ls command output to:
$ ls -la /var/lib/letsencrypt/
total 16
drwxr-xr-x 4 root root 4096 Aug 26 14:31 .
drwxr-xr-x 72 root root 4096 Aug 18 00:48 ..
drwxr-x--- 27 root root 4096 Aug 26 14:26 backups
drwxr-xr-x 2 root root 4096 Aug 26 14:27 http_challenges
Now the line with a dot (.), representing the letsencrypt directory, shows rwxr-xr-x, so "other" users (such as the www-data user apache2 runs as) can read and traverse the letsencrypt directory.
Then certbot worked as expected.
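As a quick sanity check (assuming Apache runs as the www-data user, the Debian/Ubuntu default), you can confirm that the web server account can now traverse the directory:
sudo -u www-data ls /var/lib/letsencrypt/http_challenges
If that no longer fails with "Permission denied", the HTTP challenge should go through.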
In your server block, add:
# for LetsEncrypt
location ~ /.well-known {
allow all;
}
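A quick way to verify the challenge path is reachable before re-running the client (a sketch, assuming the webroot from the question, /var/www/site/domain):
sudo mkdir -p /var/www/site/domain/.well-known/acme-challenge
echo ok | sudo tee /var/www/site/domain/.well-known/acme-challenge/test
curl http://static.domain.com/.well-known/acme-challenge/test
If curl prints "ok" rather than a 403, Let's Encrypt should be able to fetch its token too; remove the test file afterwards.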
I guess you have a different webroot for your subdomain; if so, you just need to specify that webroot. In your example you use the same webroot for both static.domain.com and domain.com.
from https://certbot.eff.org/docs/using.html
If you’re getting a certificate for many domains at once, the plugin needs to know where each domain’s files are served from, which could potentially be a separate directory for each domain. When requesting a certificate for multiple domains, each domain will use the most recently specified --webroot-path
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net
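Applied to the command from the question, a sketch with two webroots could look like this (assuming static.domain.com is served from its own directory, e.g. /var/www/site/static - adjust to wherever that vhost's files actually live):
sudo letsencrypt certonly -a webroot -w /var/www/site/domain -d domain.com -d www.domain.com -w /var/www/site/static -d static.domain.com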
I came across a workaround; it is not a real solution (it isn't automatic), but it worked.
You can prove ownership of your domain using the DNS challenge, via certbot:
sudo certbot -d domain.com --manual --preferred-challenges dns certonly
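Certbot will then pause and ask you to create a DNS TXT record before continuing; it looks roughly like this (the exact value is generated fresh on every run, so use whatever certbot prints):
_acme-challenge.domain.com.   300   IN   TXT   "<value printed by certbot>"
Once the record has propagated (dig +short TXT _acme-challenge.domain.com should return it), press Enter and the challenge should pass.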
I had to remove the AAAA records for my domain because certbot was preferring IPv6. My web host's DNS had default AAAA records for www and for the root of the domain.
After carefully examining /var/log/letsencrypt/letsencrypt.log - down where it says "addressUsed" - I saw that it was using an IPv6 address. In my case, neither www. nor the root of my domain serves a website over IPv6, so I removed the AAAA records and saw immediate relief from my problem. Due to DNS propagation and record TTLs, it may take longer for others to see relief.
certbot will try to connect to you over an IPv6 address if it can resolve one, even if you are expecting the connection over IPv4, and that was the extent of my problem.
I suggest deleting the log first so you only have fresh entries before re-running the command - sudo rm /var/log/letsencrypt/letsencrypt.log - then find "addressUsed" and verify that it's an IPv4 address and not an IPv6 address. If it's an IPv6 address, either forward that address at the gateway to your host and verify you're listening on IPv6 as well, OR remove the AAAA records in DNS so that Let's Encrypt connects to you over IPv4 instead.
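To check up front which address Let's Encrypt is likely to use, you can query your DNS records directly (assuming dig is installed; substitute your real hostname):
dig +short A www.domain.com
dig +short AAAA www.domain.com
If the AAAA query returns an address your web server doesn't actually answer on, either remove that record or make nginx listen on the IPv6 address too.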
In case anyone is still facing this issue, try the configuration below; it worked for me:
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
alias /home/nginx/domains/domain.com/public/acme-challenge/;
}
In my case I denied access to security related files (/.htaccess, /.htpasswd, etc.) via
location ~ /\. {
deny all;
}
Which I changed to
location ~ /\.ht {
deny all;
}
I don't want to repeat things unnecessarily, but it seems there are quite a few different situations that can cause a 403 at certificate renewal. For me, it came down to an nginx config that had been changed for WordPress URL rewriting (using Virtualmin, by the way).
There is a link above in the comments that refers to an issue on GitHub. One person there explains brilliantly how location matching works in nginx and gives a solution for the 403. Still, there might be other causes too.
Thus, for me the solution was to include a location match for /.well-known/.
location ^~ /.well-known/ {
#limit_req [tighter per-ip settings here]; ## kicked this one out
access_log off;
log_not_found off;
#root /var/www/html; ## kicked this one out
autoindex off;
index index.html; # "no-such-file.txt",if expected protos don't need it
try_files $uri $uri/ =404;
}
I am no nginx expert at all, so I would encourage you to read the post and check which parameters are needed for your situation.
Related
I've tried for hours with lots of solutions, but I cannot get rid of this 403 error when serving a static subdomain with NGINX.
I've tried chmod-ing all the permissions along the path to the static folder and editing the config file over and over.
NGINX serves my reverse-proxied Node app beautifully, but blocks all the static subdomains that used to be served from this server.
Permissions:
dr-xr-xr-x root root /
drwxr-xr-x root root home
drwx--x--x ca****8sh nginx ca****8sh
lrwxrwxrwx ca****8sh ca****8sh www -> public_html
drwxr-x--- ca****8sh ca****8sh public_html
drwxr-xr-x nginx nginx residenza******.******ano.ch
config file:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name residenza******.******ano.ch;
    root /home/ca****8sh/www/residenza******.******ano.ch/;

    location / {
        #try_files $uri $uri/ =404;
        index index.html;
        #autoindex on;
        #autoindex_exact_size off;
    }
[continues with SSL setup]
I've also tried tweaking things around like enabling autoindex but to no avail.
I'm in despair, please help!
Check which user nginx runs as in the first line of nginx.conf; it should be either nginx or www-data. Then run the command below, replacing www-data with nginx if the user is nginx:
chown -R www-data /home/ca****8sh/www/residenza******.******ano.ch/
If you are using SELinux (e.g. on CentOS), run these commands too:
sudo setsebool -P httpd_can_network_connect on
sudo chcon -R -t httpd_sys_content_t /home/ca****8sh/www/residenza******.******ano.ch/
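To see exactly which path component is blocking nginx, namei (from util-linux) prints the permissions of every directory along the way; a diagnostic sketch using the same path:
namei -l /home/ca****8sh/www/residenza******.******ano.ch/
Look for the first component where the nginx user or group is missing the execute (traverse) bit.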
I handled it by changing the global nginx user to a higher-privileged user. That was what was causing the permissions issue.
I am trying to install a Let's Encrypt SSL certificate across four sites:
mysite.com
es.mysite.com
fr.mysite.com
de.mysite.com
I ran the following command: certbot --nginx -d mysite.com -d www.mysite.com, which worked fine for mysite.com, es.mysite.com and fr.mysite.com. When I ran sudo certbot --nginx -d de.mysite.com I got the following error:
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: de.mysite.com
Type: unauthorized
Detail: Invalid response from
https://de.mysite.com/.well-known/acme-challenge/te29XBKAQdQBbQxvzPTgfgaFpzM_OUj6b4gSuiuPvOI
[MY IP ADDRESS]: "\r\n\r\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML
1.0 Transitional//EN\"
\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\r\n<"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
I then tried to install the certificate manually using the following command: certbot certonly --manual -d de.mysite.com. I was then asked "Are you OK with your IP being logged?"; I selected Y and hit Enter. Then I followed this step:
Create a file containing just this data:
SJpIiQET8X0vehhTjmcPBrm3zsbS1p8f9Mf2oKE5l5w.SkXszSMjtmN2-3gN7kkDhgSElerR3H1MgUc9N8z70n4
And make it available on your web server at this URL:
http://de.mysite.com/.well-known/acme-challenge/SJpIiQET8X0vehhTjmcPBrm3zsbS1p8f9Mf2oKE5l5w
I pressed Enter to Continue and then got the same error:
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: de.mysite.com
Type: unauthorized
Detail: Invalid response from
https://de.mysite.com/.well-known/acme-challenge/SJpIiQET8X0vehhTjmcPBrm3zsbS1p8f9Mf2oKE5l5w
[MY IP ADDRESS]: "\r\n\r\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML
1.0 Transitional//EN\"
\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\r\n<"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Can anyone advise how to resolve this error and successfully install the Let's Encrypt SSL certificate?
Thanks.
I managed to resolve my issue. I had to include the following in my nginx config first:
# allow access to everything under /.well-known
location ~ /.well-known {
    allow all;
}

# serve the ACME challenge tokens as plain text from the site's webroot
location ^~ /.well-known/acme-challenge/ {
    default_type "text/plain";
    root /data/wordpress/mysite/;
}

# return 404 for the bare challenge directory itself
location = /.well-known/acme-challenge/ {
    return 404;
}
Then I had to install the Let's Encrypt SSL certificate manually by running certbot certonly --manual -d de.mysite.com and followed the steps to successfully install the certificate.
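If you'd rather avoid repeating the manual steps at every renewal, the webroot plugin should also work with the config above (a sketch; it assumes the same document root as the acme-challenge location):
certbot certonly --webroot -w /data/wordpress/mysite -d de.mysite.com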
I'm sending logs to an nginx server and want to dump these logs to a file. When sending one log at a time, I was able to do this using the NginxEchoModule to force nginx to read the body like so:
http {
    log_format log_dump '$request_body';

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        access_log /logs/dump log_dump;

        location /logs {
            echo_read_request_body;
        }
    }
}
This works fine when I send one log at a time:
POST /logs HTTP/1.1
Host: www.example.com
123456 index.html was accessed by 127.0.0.1
POST /logs HTTP/1.1
Host: www.example.com
123457 favicon.ico was accessed by 127.0.0.1
However when I try to batch logs (to avoid both connection overhead and HTTP header overhead):
POST /logs HTTP/1.1
Host: www.example.com
123456 index.html was accessed by 127.0.0.1
123457 favicon.ico was accessed by 127.0.0.1
This is what shows up in my log file:
123456 index.html was accessed by 127.0.0.1\x0A123457 favicon.ico was accessed by 127.0.0.1
Now my assumption is that because one nginx log line is intended to be one line, it's encoding my new-line characters to ensure this. Is there a way to allow multi-line nginx logs?
Actually got the answer from one of the more experienced engineers at my work this time:
log_format log_dump escape=none '$request_body';
This requires nginx 1.13.10 or later, but it prevents nginx from escaping newlines in the logs:
$> curl http://localhost/logs -d "Words
dquote> More words"
$> cat /logs/dump
Words
More words
$>
I am trying to use certbot and letsencrypt on my Ubuntu 16.04 server, so I can install a mail server.
I am running certbot like this:
sudo /opt/letsencrypt/certbot-auto certonly --agree-tos --webroot -w /path/to/www/example -d example.com -d www.example.com
I get the following output from certbot (snippet shown below):
Domain: www.example.com
Type: unauthorized
Detail: Invalid response from
http://www.example.com/.well-known/acme-challenge/QEZwFgUGOJqqXHcLmTmkr5z83dbH3QlrIUk1S3JI_cg:
"<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
This is what my directory structure looks like:
root@yourbox:/path/to/www/example$ ls -la
total 12
drwxr-xr-x 3 example root 4096 Nov 1 10:17 .
drwxr-xr-x 5 root webapps 4096 Nov 1 10:13 ..
drwxr-xr-x 2 root root 4096 Nov 1 10:36 .well-known
root@yourbox:/path/to/www/example$
root@yourbox:/path/to/www/example$ cd .well-known/
root@yourbox:/path/to/www/example/.well-known$ ls -la
total 8
drwxr-xr-x 2 root root 4096 Nov 1 10:36 .
drwxr-xr-x 3 example root 4096 Nov 1 10:17 ..
root@yourbox:/path/to/www/example/.well-known$
From the above, I can see that the challenge file does not exist - presumably because certbot is unable to write to the folder.
However, I first needed to check that nginx was set up correctly, and that it was serving files from folders starting with a period.
This is the configuration file for nginx for the website (/etc/nginx/sites-available/example):
server {
    # Allow access to the letsencrypt ACME Challenge
    location ~ /\.well-known\/acme-challenge {
        allow all;
    }
}
I manually created a test file (sudo touch /path/to/www/example/.well-known/acme-challenge/fake) and gave it the correct permissions:
root@yourbox:/path/to/www/example/.well-known/acme-challenge$ ls -l
total 0
-rw-r--r-- 1 example webapps 0 Nov 1 10:45 fake
I then tried to access http://www.example.com/.well-known/acme-challenge/fake from a browser - and got a 404 error.
This means I have two errors:
Nginx is not correctly setup to serve files from the .well-known/acme-challenge folder
The file permissions in the /path/to/www/example folder are wrong, so certbot can't write its automatically generated files to the .well-known/acme-challenge folder.
How may I fix these issues?
Your Nginx config file has nothing in it to make your /path/to/www/example/ directory web-accessible.
Here's a simple configuration which will put your site live and allow Let's Encrypt to create a valid certificate. Bear in mind that port 80 will need to be accessible.
server {
    listen 80;
    server_name www.example.co.uk example.co.uk;
    root /path/to/www/example;

    access_log /var/log/nginx/example.co.uk.log;
    error_log /var/log/nginx/example.co.uk.log;

    index index.html index.htm index.php;

    location ~ /\.well-known\/acme-challenge {
        allow all;
    }

    location / {
        try_files $uri $uri/index.html $uri.html =404;
    }
}
Change your server_name accordingly, or use your /etc/hosts file to configure a local domain.
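With that configuration in place and port 80 reachable, a webroot run should then succeed (a sketch reusing the paths above; substitute your real domain and webroot):
sudo /opt/letsencrypt/certbot-auto certonly --agree-tos --webroot -w /path/to/www/example -d example.co.uk -d www.example.co.uk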
I had the same problem which was caused by the following line:
location ~ /\. {
deny all;
}
I added the following ABOVE the block mentioned above:
location ~ /\.well-known\/acme-challenge {
allow all;
}
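For context, nginx evaluates regex locations in the order they appear and uses the first match, which is why the allow block must come before the deny-all block. Put together (the same directives as above, just shown in order):
location ~ /\.well-known\/acme-challenge {
    allow all;
}
location ~ /\. {
    deny all;
}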
I have followed the instructions and still I can't password-protect my site. This is what my app-nginx.config looks like:
server {
    listen 80;
    server_name Server_Test;

    auth_basic "Restricted";
    auth_basic_user_file /usr/local/nginx/conf/htpasswd;
    ...
}
Where am I going wrong? I copied and pasted this right from a tutorial site.
Make sure Nginx can access the password file. Relative paths for auth_basic_user_file are resolved against the nginx prefix directory. So if your prefix is /usr/local/nginx (the default for source builds), you can change your directive to:
auth_basic_user_file conf/htpasswd;
and the file must be readable.
This file should be readable by workers, running from unprivileged
user. E. g. when nginx run from www you can set permissions as:
chown root:nobody htpasswd_file
chmod 640 htpasswd_file
-- from http://wiki.nginx.org/HttpAuthBasicModule
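Applied to the question's setup (an assumption: the nginx workers run as www-data; check the user directive at the top of nginx.conf), that would be:
sudo chown root:www-data /usr/local/nginx/conf/htpasswd
sudo chmod 640 /usr/local/nginx/conf/htpasswd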
I just got my nginx server working and even configured it to password-protect access to my root folder. I'd like to share my findings with you and, along the way, give a good working answer to the question on this page.
I'm writing this as a new nginx user (version 1.10.0 on Ubuntu).
The first problem I've got was to know the file locations, so here are the critical locations:
Know your locations:
Main folder location: /etc/nginx
Default site location: /var/www/ or even /var/www/html/ (inside the html folder will be the index.html file - hope you know what to do from there.)
Configuration files:
Main configuration file: /etc/nginx/nginx.conf
Current site server conf: /etc/nginx/sites-enabled (upon first installation there is a single file there called default, and you'll need to use sudo to be able to change it, for example: sudo vi default)
Add password:
So, now that we know the players (for a static out-of-the-box site, anyway), let's put some files in the 'html' folder and add password protection to it.
To setup a password we need to do 2 things:
create a passwords file (with as many users as we want, but I'll settle for one).
Configure the current server ('default') to restrict this page and use the file from step 1 to enable the password protection.
1. Let's create a password:
The command I'd like to use for this is:
sudo htpasswd -c /etc/nginx/.htpasswd john (you'll get a prompt to enter and re-enter the password), or you can do it in a single line by adding the -b flag:
sudo htpasswd -cb /etc/nginx/.htpasswd john [your password]
I'll explain each part of the command:
sudo htpasswd - run it with elevated permissions.
-c - create the file (to add another user to an existing file, skip this argument)
-b - take the password from the command line instead of prompting for it (only needed for the single-line form above)
/etc/nginx/.htpasswd - the name of the file created ('.htpasswd' in the folder /etc/nginx)
john is the name of the user (to enter at the prompted 'user' field)
password is the password wanted for this specific user name (when prompted).
If the htpasswd command isn't available yet, you'll have to install its package:
Use: sudo apt-get install apache2-utils (if it fails, run sudo apt-get update and try again)
2. Let's configure the server to use this file for authentication
Let's use this line to edit the current (default) server conf file:
sudo vi /etc/nginx/sites-enabled/default (You don't have to use 'vi' but I like it..)
The file looks like this after removing most of the comments (#)
# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}
We'll need to add two lines inside the location block ('/' refers to the root folder of the site) so that it looks like this:
location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ =404;

    auth_basic "Restricted Content";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
I'll explain these new lines:
auth_basic "Restricted Content"; - defines the type of access management
auth_basic_user_file /etc/nginx/.htpasswd; - defines the file we've created (/etc/nginx/.htppasswd) as the passwords file for this authentication.
Let's restart the service and enjoy a password protected site:
sudo service nginx restart
Voila - enjoy...
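To verify it works, a quick check with curl (using the user created above; replace the placeholder password):
curl -I http://localhost/                        # should return a 401 without credentials
curl -I -u john:[your password] http://localhost/   # should return 200 OK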
Here are some more great tutorials for this:
Very good explanation
Another good tutorial