Enable Let's Encrypt on Debian 9 with Nginx for Gogs

I just installed Gogs on a VPS by following this tutorial (https://gogs.io/docs/installation/install_from_source).
I have a subdomain pointing to my Gogs instance, git.mydomainname.com, and it works: http://git.mydomainname.com reaches my Gogs instance through a reverse proxy.
I would like to protect Gogs with SSL, so I want to set up Let's Encrypt using this guide (https://certbot.eff.org/#debianstretch-nginx).
I should mention that I am new to system administration and don't necessarily understand everything I did during the Gogs install.
I am also new to Nginx (I'm more used to Apache).
Here is the process I followed:
$ sudo certbot certonly
Saving debug log to /var/log/letsencrypt/letsencrypt.log
How would you like to authenticate with the ACME CA?
-------------------------------------------------------------------------------
1: Place files in webroot directory (webroot)
2: Spin up a temporary webserver (standalone)
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c'
to cancel):git.mydomainname.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for git.mydomainname.com
Select the webroot for git.mydomainname.com:
-------------------------------------------------------------------------------
1: Enter a new webroot
-------------------------------------------------------------------------------
Press 1 [enter] to confirm the selection (press 'c' to cancel): /home/git/go/src/github.com/gogits/gogs
** Invalid input **
Press 1 [enter] to confirm the selection (press 'c' to cancel): 1
Input the webroot for git.mydomainname.com: (Enter 'c' to cancel):/home/git/go/src/github.com/gogits/gogs
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. git.mydomainname.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://git.mydomainname.com/.well-known/acme-challenge/N4rMGzoq1Bwyt9MP9fUlVY3_mDnJfRYpQkdvc7WrNJs: "<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>"
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: git.mydomainname.com
Type: unauthorized
Detail: Invalid response from
http://git.mydomainname.com/.well-known/acme-challenge/N4rMGzoq1Bwyt9MP9fUlVY3_mDnJfRYpQkdvc7WrNJs:
"<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
So I checked: the DNS A record is fine.
I also found another tutorial in French (https://www.grafikart.fr/formations/serveur-linux/nginx-ssl-letsencrypt) and realized I had to update my Nginx config for the site. I did so, even though I have a reverse proxy (maybe the issue is there).
server {
    listen 80;
    server_name git.mydomainname.com;

    location ~ /\.well-known/acme-challenge {
        allow all;
    }

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location / {
        proxy_pass http://localhost:port_number;
    }
}
Thanks for your help.

You are proxying all of your requests to http://localhost:port_number, but that application probably doesn't know how to answer the Let's Encrypt challenge request.
Instead, change your .well-known location to:
location ^~ /.well-known/acme-challenge/ {
    allow all;
    root /var/www/letsencrypt;
}
And when certbot asks you for a webroot, you can answer /var/www/letsencrypt.
Note: you can change /var/www/letsencrypt to any directory you want. It just needs to exist first and be readable by Nginx's user.
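As a minimal sketch of that setup on Debian (assuming Nginx runs as www-data and the packaged certbot client is used):
# Create the challenge webroot and make it readable by Nginx's user
sudo mkdir -p /var/www/letsencrypt
sudo chown www-data:www-data /var/www/letsencrypt
# Check and reload the Nginx config so the new location is active
sudo nginx -t && sudo systemctl reload nginx
# Request a certificate using that webroot
sudo certbot certonly --webroot -w /var/www/letsencrypt -d git.mydomainname.com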

Related

simple Nginx proxy_pass gives back 502

I'm trying to do a very basic task: redirecting between two servers based on the HTTP referer.
I tried a basic if, but I know "if is evil" in Nginx and didn't manage to make it work, so I came up with this solution of directing based on valid_referers. But I keep getting 502.
server {
    location /application1 {
        valid_referers server_names .click2dad.net*;
        if ($invalid_referer) {
            set $1 '';
            proxy_pass http://localhost:8080/$1;
        }
    }
}
Server 2:
server {
    listen 8080;
    root /home/ubuntu/data/server2;
}
By the way, I used set $1 '' since I kept getting the error that proxy_pass cannot have a URI inside an if statement.
Thanks
Make sure you have something running on http://localhost:8080/.
Check whether port 8080 is used by any process, for example by running
netstat -tulpn | grep 8080
in your terminal.
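As a quick sketch of that check (listening socket plus a direct HTTP request to the backend):
# Confirm something is listening on port 8080...
ss -tlnp | grep 8080
# ...and that it answers HTTP requests directly
curl -I http://localhost:8080/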

How to block visitors from particular country with nginx and GeoIP Module

I want to block visitors from a particular country from accessing my website www.mainwebsite.com, using Nginx and the GeoIP module.
First I tried it on www.test.com. Here are the steps I followed on the test website, www.test.com, before trying it on www.mainwebsite.com:
1. Install GeoIP:
sudo apt update && sudo apt-get install geoip-database
2. Check whether the GeoIP module is compiled into Nginx:
nginx -V 2>&1 | grep --color=always with-http_geoip_module
3. Download the GeoIP database:
sudo mkdir /etc/nginx/GeoIP/
Place the GeoIP.dat file in /etc/nginx/GeoIP/.
4. Configure Nginx and the virtual host:
sudo vi /etc/nginx/nginx.conf
http {
    ##
    # Basic Settings
    ##
    geoip_country /etc/nginx/GeoIP/GeoIP.dat;
    map $geoip_country_code $allowed_country {
        default yes;
        IN no;
    }
}
Save and exit.
5. Edit the virtual host:
sudo vi /etc/nginx/site-available/test.com
Add the map block at the top, outside of server{...}:
map $geoip_country_code $allowed_country {
    default yes;
    IN no;
}
Then, inside server{...}, add the if condition:
if ($allowed_country = no) {
    return 403;
}
Save and exit.
6. Reload and restart Nginx:
sudo service nginx reload
sudo service nginx restart
www.test.com is hosted directly on EC2 instance test-server-01 with a public IP. Blocking worked, and users from the blocked country were not able to access the site.
www.mainwebsite.com is hosted behind a Classic Load Balancer, with EC2 instances attached to it.
For testing, I created two replicas of test-server-01, created a new load balancer, attached both replicas behind it, and pointed www.test.com to the new load balancer. But the country blocking didn't work, so I added the two lines below above the if condition mentioned in point 5, and then blocking worked.
real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0;
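For context, a sketch of how those lines sit in the virtual host behind the load balancer (note that 0.0.0.0/0 trusts the X-Forwarded-For header from any client; restricting it to the load balancer's address range would be safer):
server {
    listen 80;
    server_name www.test.com;

    # Take the real client IP from the load balancer's X-Forwarded-For header
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;

    # $allowed_country comes from the geoip map defined outside server{}
    if ($allowed_country = no) {
        return 403;
    }
}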
I then followed the same steps 1 to 6 for www.mainwebsite.com and made the changes in nginx.conf and /etc/nginx/site-available/mainwebsite.com, but country blocking didn't work.
One thing I noticed: for www.test.com, the contents of /etc/nginx/site-available/test.com and the linked file /etc/nginx/site-enabled/test.com are the same.
But for www.mainwebsite.com, the contents of /etc/nginx/site-available/mainwebsite.com and /etc/nginx/site-enabled/mainwebsite.com are not the same.
/etc/nginx/site-enabled/mainwebsite.com has some extra content, like:
Outside of the server{} block:
# Expires map
map $sent_http_content_type $expires {
    default                 off;
    text/html               epoch;
    text/css                max;
    application/javascript  max;
    ~image/                 max;
    application/font-woff   max;
}
And inside the server{} block:
server_name www.mainwebsite.com;
rewrite ^/blog/blogs$ https://www.mainwebsite.com/blogs permanent;
rewrite ^/companies https://www.mainwebsite.com.com/company permanent;
rewrite ^/events-2/* https://www.mainwebsite.com/events permanent;
Is this the actual reason why country blocking is not working, or could there be other reasons? Please help me out.

Let's Encrypt unable to verify the domain

I have the following config working:
upstream app_admin {
    server 127.0.0.1:8080;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name admin.test.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://app_admin;
        proxy_redirect off;
    }

    # for Let's Encrypt to work properly
    location ^~ /.well-known {
        root /var/www/;
        default_type "text/plain";
        allow all;
    }
}
I installed Let's Encrypt and ran this command, but I get errors:
/opt/letsencrypt# ./letsencrypt-auto certonly -a webroot --webroot-path=/var/www -d admin.test.com
Performing the following challenges:
http-01 challenge for admin.test.com
Using the webroot path /var/www for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. admin.test.com (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://admin.test.com/.well-known/acme-challenge/zlmdu1lWhUxZdwyV_1Kf--vgzEM6ETXr_qZzwR4uf6pM: Timeout
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: admin.test.com
Type: connection
Detail: Fetching
http://admin.test.com /.well-known/acme-challenge/zlmdu1lWhUbfdwyV_1Kf--vgzEM6ETXr_qZytfR4I6pM:
Timeout
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
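One way to check that last point (a sketch, assuming the same /var/www webroot): drop a test file where the challenge would go and fetch it from outside; if this times out or returns 404, certbot will fail the same way.
# Create a test challenge file in the configured webroot
sudo mkdir -p /var/www/.well-known/acme-challenge
echo ok | sudo tee /var/www/.well-known/acme-challenge/test
# Fetch it from a machine outside your network
curl -v http://admin.test.com/.well-known/acme-challenge/test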
I can get to my site by going to admin.test.com without issues.
EDIT1:
With this command, I got the following:
$ ./letsencrypt-auto certonly --standalone
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c'
to cancel):admin.test.com
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for admin.test.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. admin.test.com (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Timeout
Again, I CAN get to my site with: admin.test.com (port 80).
Looking at the bottom of the log
/var/log/letsencrypt/letsencrypt.log
To fix these errors, please make sure that your domain name was entered correctly and the DNS A record(s) for that domain contain(s) the right IP address. Additionally, please check that your computer has a publicly routable IP address and that no firewalls are preventing the server from communicating with the client. If you're using the webroot plugin, you should also verify that you are serving files from the webroot path you provided.
2017-06-29 03:09:34,776:INFO:certbot.auth_handler:Cleaning up challenges
2017-06-29 03:09:34,777:DEBUG:certbot.plugins.standalone:Stopping server at :::443...
2017-06-29 03:09:35,014:DEBUG:certbot.log:Exiting abnormally:
Traceback (most recent call last):
  File "/root/.local/share/letsencrypt/bin/letsencrypt", line 11, in <module>
    sys.exit(main())
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/main.py", line 743, in main
    return config.func(config, plugins)
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/main.py", line 683, in certonly
    lineage = _get_and_save_cert(le_client, config, domains, certname, lineage)
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/main.py", line 82, in _get_and_save_cert
    lineage = le_client.obtain_and_enroll_certificate(domains, certname)
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/client.py", line 344, in obtain_and_enroll_certificate
    certr, chain, key, _ = self.obtain_certificate(domains)
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/client.py", line 313, in obtain_certificate
    self.config.allow_subset_of_names)
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/auth_handler.py", line 81, in get_authorizations
    self._respond(resp, best_effort)
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/auth_handler.py", line 138, in _respond
    self._poll_challenges(chall_update, best_effort)
  File "/root/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/auth_handler.py", line 202, in _poll_challenges
    raise errors.FailedChallenges(all_failed_achalls)
FailedChallenges: Failed authorization procedure. admin.test.com (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Timeout
What am I doing wrong?

Upgraded nginx, now munin stats don't work

I was running Debian 8 and the default repo's version of nginx (~1.6). I changed the repo to the nginx one and installed the latest version (1.10.0), and now my Munin stats don't work, except for RAM usage. Specifically:
Requests
Requests/connection handled
Nginx status
...all don't work and produce blank graphs. Nginx works as expected and nothing else appears to have changed. I'm not sure what logs to check - munin-graph.log, munin-html.log, munin-update.log and munin-node.log contain no errors or warnings.
Any advice of how to troubleshoot this is welcome!
The nginx_* plugins need access to the URL http://localhost/nginx_status. Check it with wget http://localhost/nginx_status or munin-run -d nginx_status (in the second case that is the plugin name, not the location from the Nginx config).
Also check the Nginx config. It must contain something like
location /nginx_status {
    stub_status on;
    access_log off;
    allow 127.0.0.1;
    deny all;
}
in the server section
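A quick way to verify both sides (a sketch; the plugin name and status location are as above):
# Reload Nginx with the stub_status location in place
sudo nginx -t && sudo nginx -s reload
# The status URL should answer locally...
wget -qO- http://localhost/nginx_status
# ...and the Munin plugin should now return values
munin-run nginx_status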

Nginx gives an Internal Server Error 500 after I have configured basic auth

I am trying to set up basic auth on Nginx. I have version 1.9.3 up and running on Ubuntu 14.04, and it works fine with a simple HTML file.
Here is the html file:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title></title>
</head>
<body>
"Some shoddy text"
</body>
</html>
And here is my nginx.conf file:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name 192.168.1.30;

        location / {
            root /www;
            index index.html;
            auth_basic "Restricted";
            auth_basic_user_file /etc/users;
        }
    }
}
I used htpasswd to create two users in the "users" file under /etc (username "calvin" password "Calvin", and username "hobbes" password "Hobbes"). It's encrypted and looks like this:
calvin:$apr1$Q8LGMfGw$RbO.cG4R1riIfERU/175q0
hobbes:$apr1$M9KoUUhh$ayGd8bqqlN989ghWdTP4r/
All files belong to root:root. The server IP address is 192.168.1.30 and I am referencing that directly in the conf file.
It all works fine if I comment out the two auth lines and restart Nginx. If I uncomment them, I do get the username and password prompt when I try to load the site, but immediately afterwards I get a 500 Internal Server Error, which persists until I restart Nginx.
Can anybody see what I'm doing wrong here? I had the same behaviour with the standard Ubuntu 14.04 apt-get version of Nginx (1.4.something), so I don't think it's the Nginx version.
Not really an answer to your question as you are using MD5. However as this thread pops up when searching for the error, I am attaching this to it.
Similar errors happen when bcrypt is used to generate passwords for auth_basic:
htpasswd -B <file> <user> <pass>
Since bcrypt is not supported by auth_basic at the moment, mysterious 500 errors show up in the Nginx error log (usually /var/log/nginx/error.log), looking something like this:
*1 crypt_r() failed (22: Invalid argument), ...
At present the solution is to generate a new password using MD5, which is the default anyway.
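A minimal sketch of regenerating an entry with MD5 (the apr1 scheme htpasswd uses by default, here for the calvin user from the question):
# Replaces calvin's entry in the existing file with an MD5 (apr1) hash
htpasswd -m /etc/users calvin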
Edited to address MD5 issues as brought up by @EricWolf in the comments:
MD5 has its problems for sure; some context can be found in the following threads:
Is md5 considered insecure?
Is md5 still considered secure for single use authentications?
Of the two issues, the speed problem can be mitigated by using fail2ban: by banning on failed basic auth you make online brute-forcing impractical (guide). You can also use long passwords to fortify things a bit, as suggested here.
Other than that it seems this is as good as it gets with nginx...
I had goofed up when initially creating a user. As a result, the htpasswd file looked like:
user:
user:$apr1$passwdhashpasswdhashpasswdhash...
After deleting the blank user, everything worked fine.
I was running Nginx in a Docker environment and I had the same issue. The reason was that some of the passwords were generated using bcrypt. I resolved it by using nginx:alpine.
Do you want a MORE secure password hash with nginx basic_auth? Do this:
echo "username:"$(mkpasswd -m sha-512) >> .htpasswd
SHA-512 is not considered nearly as good as bcrypt, but it's the best nginx supports at the moment.
I just stick the htpasswd file under /etc/nginx myself.
Assuming it is named htcontrol, then...
sudo htpasswd -c /etc/nginx/htcontrol calvin
Follow the prompt for the password and the file will be in the correct place.
location / {
    ...
    auth_basic "Restricted";
    auth_basic_user_file htcontrol;
}
Or auth_basic_user_file /etc/nginx/htcontrol;, but the first variant works for me.
I just had the same problem. After checking the log as suggested by @Drazen Urch, I discovered that the file had root:root permissions; after changing them to forge:forge (I'm using Forge with DigitalOcean), the problem went away.
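A sketch of what that check and fix can look like (www-data is the worker user on Debian/Ubuntu; on a Forge server it is forge, as above):
# See who owns the password file and whether the worker user can read it
ls -l /etc/users
# Make it readable by the Nginx worker's group without opening it to everyone
sudo chown root:www-data /etc/users
sudo chmod 640 /etc/users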
Well, just use correct RFC 2307 syntax:
passwordvalue     = schemeprefix encryptedpassword
schemeprefix      = "{" scheme "}"
scheme            = "crypt" / "md5" / "sha" / altscheme
altscheme         = "x-" keystring
encryptedpassword = encrypted password
For example, the SHA-1 of helloworld for user admin will be:
admin:{SHA}at+xg6SiyUovktq1redipHiJpaE=
I had the same error because I wrote {SHA1}, which is against the RFC syntax. When I fixed it, everything worked like a charm. {sha} will not work either; only {SHA} is correct.
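If you want to generate such an entry yourself, a sketch using openssl (producing the admin/helloworld line shown above):
# SHA-1 the password, base64-encode it, and prefix it with the {SHA} scheme
printf 'admin:{SHA}%s\n' "$(printf '%s' 'helloworld' | openssl dgst -binary -sha1 | openssl base64)"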
First, check out your nginx error logs:
tail -f /var/log/nginx/error.log
In my case, I found the error:
[crit] 18901#18901: *6847 open() "/root/temp/.htpasswd" failed (13: Permission denied),
The /root/temp directory is one of my test directories and cannot be read by Nginx. After changing it to /etc/apache2/ (following the official guide https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/), everything works fine.
===
After running ps, we can see the Nginx worker process runs as the user www-data. I had tried chown www-data:www-data /root/temp to make sure www-data could access the file, but it still didn't work. To be honest, I don't have a very deep understanding of Linux file permissions, so in the end I moved it to /etc/apache2/ to fix this. After testing, you can also put the .htpasswd file in other directories under /etc (like /etc/nginx).
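A sketch of how to spot this: check every directory component of the path, not just the file, since /root is normally mode 700 and the worker user cannot descend into it even if the file itself is readable.
# Show the permissions of each path component leading to the file
namei -l /root/temp/.htpasswd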
I was facing the same problem while setting up authentication for Kibana. Here is the error in my /var/log/nginx/error.log file:
2020/04/13 13:43:08 [crit] 49662#49662: *152 crypt_r() failed (22: Invalid argument), client: 157.42.72.240, server: 168.61.168.150, request: "GET / HTTP/1.1", host: "168.61.168.150"
I resolved this issue by recreating the authentication entry like this:
sudo sh -c "echo -n 'kibanaadmin:' >> /etc/nginx/htpasswd.users"
sudo sh -c "openssl passwd -apr1 >> /etc/nginx/htpasswd.users"
You can refer to this post if you are trying to set up Kibana and ran into this issue:
https://medium.com/@shubham.singh98/log-monitoring-with-elk-stack-c5de72f0a822?postPublishedType=repub
In my case, I was passing a plain-text password with the -p flag, and coincidentally my password started with a $ character.
So I updated my password and the error was gone.
NB: other people's answers helped me a lot in figuring out my problem. I am posting my solution here in case anyone else is stuck in a rare case like mine.
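A sketch of the safer variants (hypothetical user and password; the point is to keep the shell from expanding the $):
# Let htpasswd prompt for the password instead of passing it on the command line
htpasswd -c /etc/nginx/.htpasswd myuser
# Or, if it must go on the command line, single-quote it so $ is not expanded
htpasswd -cb /etc/nginx/.htpasswd myuser 'pa$$word'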
In my case, I had my auth_basic setup protecting an nginx location that was served by a proxy_pass configuration.
The configured proxy_pass location wasn't returning a successful HTTP 200 response, which caused Nginx to respond with an Internal Server Error after I had entered the correct username and password.
If you have a similar setup, ensure that the proxy_pass location protected by auth_basic returns an HTTP 200 response once you rule out username/password issues.
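A quick sketch of that check, hitting the upstream directly (replace 127.0.0.1:8080 with your proxy_pass target):
# If this does not return HTTP/1.1 200, fix the upstream before blaming auth_basic
curl -i http://127.0.0.1:8080/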
