Certbot /.well-known/acme-challenge - nginx

Should I leave the /.well-known/acme-challenge always exposed on the server?
Here is my config for the HTTP:
server {
    listen 80;

    # Serve ACME challenge files over plain HTTP (needed for auto-renewal)
    location '/.well-known/acme-challenge' {
        root /var/www/demo;
    }

    # Redirect everything else to HTTPS
    location / {
        if ($scheme = http) {
            return 301 https://$server_name$request_uri;
        }
    }
}
Which basically redirects all requests to HTTPS, except for the acme-challenge (needed for auto-renewal). My question: is it alright to keep location '/.well-known/acme-challenge' always exposed on port 80? Or is it better to comment/uncomment it manually whenever I need to reissue the certificate? Are there any security issues with that?
Any advice or links to read about this location would be appreciated. Thanks!

The ACME challenge path is only needed while verifying that the domain points to this IP address.

You do not need to keep the token available once your certificate has been signed. However, there is not much harm in leaving it available either, as explained by a Certbot engineer:
The token is part of a particular challenge which is no longer active, from the ACME server's point of view, after the server has tried to validate it. It would reveal a little bit of information about how you get certificates, but should not allow someone else to issue certificates for your site or impersonate you.
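If you do decide to leave it exposed permanently, the whole picture looks roughly like the sketch below (the server_name and certificate paths are placeholders/Certbot defaults, not taken from your config):

server {
    listen 80;
    server_name example.com;

    # Always answer ACME HTTP-01 challenges over plain HTTP
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/demo;
    }

    # Everything else goes to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name example.com;

    # Certbot's default certificate paths; adjust to your system
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ... the rest of the site ...
}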

In case someone finds this helpful, I asked my hosting provider's customer support and they explained it as follows:
Yes, the .well-known folder is automatically created by cPanel in order to validate your domain for AutoSSL purposes. AutoSSL is a feature of cPanel/WHM which offers free SSL certificates for your domains. The .well-known folder is created during the domain validation process as part of the AutoSSL installation.
And it is not a file that needs to be removed; it does not cause any issue.

The period before the name (.well-known) just means it is a hidden directory. If your server gets hacked, the information in it is available to the hacker regardless.

Related

nginx proxy_pass to location returns 401 error

I have a WebDAV server set up, with its root folder /SomeVolume/webdav/contents and address 192.168.1.2:12345. User and password are set and the server can be accessed from a browser.
I am directing a domain name to the same machine using nginx, like this:
server {
    server_name my.domain.me;

    location / {
        proxy_pass http://192.168.1.2:12345;
    }

    # plus the usual Certbot SSL stuff
}
This was working perfectly well, with HTTPS authentication and everything. I am using a third-party application that uses that server and it was working OK too.
I wanted to make this a bit tidier and only changed a couple of things:
WebDAV server root changed to /SomeVolume/webdav (instead of /SomeVolume/webdav/contents), then restarted the server.
proxy_pass http://192.168.1.2:12345 changed to proxy_pass http://192.168.1.2:12345/contents, then restarted nginx (see the sketch below).
Nothing else was modified.
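For reference, the modified proxy block described above would presumably look roughly like this (a sketch based on the two changes listed, not the actual config):

server {
    server_name my.domain.me;

    location / {
        # With a URI part on proxy_pass, nginx replaces the part of the
        # request URI matched by the location with that URI: /foo is
        # forwarded as /contentsfoo here, but as /contents/foo if the
        # directive ends in a trailing slash.
        proxy_pass http://192.168.1.2:12345/contents;
    }

    # plus the usual Certbot SSL stuff
}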
I can still log in through the browser, but the third-party application has stopped working because it gets authentication errors (401). Yet if I log in locally with http://192.168.1.2:12345/contents/ it works just fine.
What am I not understanding here? Is it some caching problem with the third-party application or have I misunderstood how location & proxy_pass work?
Thanks.

How to use fail2ban with Meteor on Nginx

I'm setting up a DigitalOcean droplet running Ubuntu 18.04 to host my Meteor 1.8 app via Phusion Passenger / Nginx. I will configure it to use SSL with Let's Encrypt.
fail2ban is a recommended tool to protect against brute force attacks, but I can't work out how to use it with Meteor, or even if it's appropriate. I've read several tutorials but there is something basic I don't understand.
I have used location blocks in my Nginx config file to block access to all URLs by default and only allow the necessary ones:
# deny all paths by default
location / { deny all; }
# allow sockjs
location /sockjs { }
# allow required paths
location = / { }
location /my-documents { }
location /login { }
location /register { }
...
# serve js and css
location ~* "^/[a-z0-9]{40}\.(css|js)$" {
root /var/www/myapp/bundle/programs/web.browser;
access_log off;
expires max;
}
# serve public folder
location ~ \.(jpg|jpeg|png|gif|mp3|ico|pdf|svg) {
root /var/www/myapp/bundle/public;
access_log off;
expires max;
}
# deny unwanted requests
location ~ (\.php|\.aspx|\.asp|myadmin) {
return 404;
}
My basic question is: would fail2ban detect failed attempts to log in to my Meteor app, and if so, how? If not, then what's its purpose? Is it looking for failed attempts to log in to the server itself? I have disabled password access on the droplet; you can only connect to the server via SSH.
And how does this relate to Nginx password protection of sections of the site? Again, what's this for and do I need it? How would it work with a Meteor app?
Thank you for any help.
Any modern single-page application using React/Vue/Blaze as its rendering engine simply doesn't send URL requests to the server for each page in the UI.
Meteor loads all its assets at the initial page load, and the rest is done over sockets using DDP. It might load static assets as separate requests.
Any server API calls implemented as Meteor methods also won't show up in server logs.
So fail2ban will detect some brute force attacks, and could therefore be useful in blocking those attacks and preventing them from swamping the server, but it won't detect failed login attempts.
You could adapt the application to detect failed logins, and call the fail2ban API to log them (if that is possible). Otherwise I'm not sure whether it is totally appropriate for protecting a meteor server.
My conclusion is that yes, fail2ban is worth using with Meteor. As far as I can tell, Nginx password protection isn't relevant, but there's other good stuff you can do.
Firstly, I think it's worth using fail2ban on any server to block brute force attacks. My test server has been online only a couple of days with no links pointing to it and already I'm seeing probes to paths like wp-admin and robots.txt in the Nginx logs. These probes can't achieve anything because the files don't exist, but I think it's safer to ban repeated calls.
I worked from this tutorial to set up a jail for forbidden URLs, modifying the jail definition to point to my actual Nginx log file.
Then, I've modified my app to record failed login attempts and written a custom jail and filter to block these. It may be that nobody will bother to write a script to attack a Meteor site specifically, and my Meteor app has throttling on the logins, but again I feel it's better to be more careful than less.
Here's how I've modified my app:
server/main.js
import { Meteor } from 'meteor/meteor';
import { Accounts } from 'meteor/accounts-base';
import moment from 'moment';

// Build a log line in the same format as the Nginx log so fail2ban can parse it
const buildServerLogText = (text) => {
  const connection = Meteor.call('auth.getClientConnection');
  return `${moment(new Date()).format('YYYY/MM/DD HH:mm:ss')} ${text}, client: ${connection.clientAddress}, host: "${connection.httpHeaders.host}"`;
};

// Log failed login attempts so fail2ban can find them in the Nginx logs
Accounts.onLoginFailure(() => {
  const text = buildServerLogText('[error]: Meteor login failure');
  console.log(text);
});
This writes failed login attempts to the server in this form:
2020/03/10 15:40:20 [error]: Meteor login failure, client: 86.180.254.102, host: "209.97.135.5"
The date format is important; fail2ban is fussy about this.
I also had to set passenger_disable_log_prefix on; in my Phusion Passenger config file to stop a prefix being added to the log entry. As I'm deploying my app with Phusion Passenger, the Nginx config is in the Passenger config file.
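Roughly, assuming the standard Passenger/Nginx integration mode, that directive sits alongside the other Passenger settings in the server block (the names and paths below are placeholders, not my real config):

server {
    listen 80;
    server_name myapp.example.com;

    # Passenger serves the Meteor bundle from its public directory
    root /var/www/myapp/bundle/public;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file main.js;

    # Keeps log lines in the plain Nginx format so the fail2ban filter
    # below can match them (check the Passenger docs for which context
    # this directive belongs in for your version)
    passenger_disable_log_prefix on;
}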
Then my fail2ban filter is like this:
/etc/fail2ban/filter.d/nginx-login-failure.conf
[Definition]
failregex = ^ \[error\]:.*Meteor login failure.*, client: <HOST>, .*$
ignoreregex =

Any possible way to make secure links for multiple IPs?

I'm running a fairly well used CDN system using Nginx and I need to secure my links so that they aren't shared between users.
The current config works perfectly:
# Setup Secure Links
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr secret";
if ($secure_link = "") { return 403; }
if ($secure_link = "0") { return 410; }
However, with the internet going ever more mobile and with many users now coming from university campuses etc., I'm seeing tons of failed requests and annoyed end users, because the requester's IP has changed between requests.
The requesting IP is almost always in the same range, so for example:
Original Request: 192.168.0.25
File Request: 192.168.0.67
I'd be happy to lock these secure links down to a range, such as
192.168.0.0 - 192.168.0.255
or go even further and make it even bigger
192.168.0.0 - 192.168.255.255
but I can't figure out a way to do this in nginx, or if the secure_link feature even supports this.
If this isn't possible, does anyone have any other ideas on how to secure links that would be less restrictive but still reasonably safe? I had a look at using the browser string instead, but many of our users have download managers or use 3rd-party desktop clients, so this isn't viable.
I'm very much trying to do this without any dynamic code checking a remote database, as this is very high volume and I'd rather not have that dependency.
You can use more than one auth directive within Nginx, so you could drop the IP from the secure link and specify that as a separate directive.
Nginx uses CIDR ranges, so for your example it would simply be a case of:
allow 192.168.0.0/16;
deny all;
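Putting the two pieces together, the idea would look roughly like this (a sketch only, with a placeholder location; untested):

location /downloads/ {
    # Secure link no longer embeds the client IP
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri secret";

    if ($secure_link = "") { return 403; }
    if ($secure_link = "0") { return 410; }

    # Restrict by network range instead
    allow 192.168.0.0/16;
    deny all;
}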
You can use the map approach:
map $remote_addr $auth_addr {
    default              $remote_addr;
    ~*^192\.168\.100     192.168.100;
    ~*^192\.169          192.169;
}
And then later use something like:
secure_link_md5 "$secure_link_expires$uri$auth_addr secret";
I have not used such an approach, but I am assuming it should work. If it doesn't, please let me know.
I managed to get this working, thanks to @Tarun Lalwani for pointing out the maps idea.
# This map breaks down $remote_addr into octets
map $remote_addr $ipv4_first_two_octets {
    "~(?<octet1>\d+)\.(?<octet2>\d+)\.(?<octet3>\d+)\.(?<octet4>\d+)" "${octet1}.${octet2}";
    default "0.0";
}

location / {
    # Setup Secure Links
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri$ipv4_first_two_octets secret";
}

Website migration with Nginx

Having a bit of a learning curve issue here. Trying to do a site migration.
We have an old website, say website.co.uk, that we are closing down.
We also have an existing website, say website.com.
I have been able to map all the URLs from the first website to the second one using a map in nginx, i.e.
/page-1 /new-page-1;
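For context, that line lives inside a map block along the lines of this sketch (the variable name matches the server block below):

map $request_uri $redirect_uri {
    default     "";
    /page-1     /new-page-1;
    # ... one entry per old URL ...
}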
Then I have the following server block.
server {
    server_name www.website.co.uk website.co.uk;

    # Old website redirect
    if ($redirect_uri) {
        return 301 https://website.com$redirect_uri;
    }
}
This seems to work fine.
However trying to visit the top level domain www.website.co.uk I get a message that "Your connection is not private" and "This server could not prove that it is www.website.co.uk; its security certificate is from website.com. This may be caused by a misconfiguration or an attacker intercepting your connection".
All the redirects worked except for visiting website.co.uk itself and being sent to website.com.
Do I need another SSL cert?
Thanks!

Catch SSL cert request error so as to redirect to the correct site

We are using IIS 6 and ASP.NET. When users make secure page requests using
https://somesite.com/securePage.aspx
the user gets an error:
Error code: ssl error bad cert domain
The certificate was issued to www.somesite.com and indicates that somesite.com uses an invalid security certificate.
I was hoping to be able to catch the request in the Application BeginRequest event but the SSL error occurs before this. In order to invoke the Application BeginRequest event the user needs to click through the certificate error message. Is it possible to redirect in code or does this fix need to occur within IIS?
The only solution is to include the second domain in the certificate with a SubjectAlternativeName. Some certificate authorities will allow you to do this without extra cost.
Everything else would only happen after the SSL connection is established, and therefore after the error is encountered by the user.
With HTTPS the SSL connection is negotiated before any of the HTTP headers are sent to the server, including the Host: header that tells the server for which virtual host the request is actually intended.
I was able to solve this problem using IIS's rewrite feature. Turned out to be really easy to fix and we didn't have to purchase a new cert.
HOP is correct with his answer, and Owen also would be if we had the luxury of using IIS 7, as rewrite rules similar to Apache's mod_rewrite are now possible from within IIS. After further investigation today, together with our network admins and our SSL cert provider, adding a SAN to our certificate turns out to be quite possible and at no charge.
However, due to political issues within the org, it was decided that DEV (my group) institute a redirect to the registered domain within the Application BeginRequest event. For each request we will check that the URL points to our FQDN. If the request is made to the short name, then we will point it to the FQDN by appending www to the short name returned by the context.Host method.
No doubt this will increase chattiness etc.!
I did some testing on this on one of my servers and here is what I found.
We have a UCC certificate which will work for 5 domains. My 5 domains are
master mydomain.com
sub alt names:
mysite.com
myweb.com
thissite.com
www.thissite.com
The reason it is set up like that is because I didn't quite understand that it wanted www. when I made it.
So,
https://mydomain.com - works
https://www.mydomain.com - works
https://www.mysite.com - ERROR
https://mysite.com - works
https://thissite.com - works
https://www.thissite.com - works.
If you have a UCC cert (it seems you do), add a subject alternative name with the www on the domain in question. It will then work for both.
I went through all the steps of trying to redirect with .htaccess and server-side scripting. But hop is right, it will not do anything if you don't fix the cert.
You will likely have to drop a domain when you rekey your cert. Just remember which one you dropped and get a new cert for that one. I will forever and always REFUSE to buy UCC certs from now on. More problems than they are worth. 1 domain = 1 cert.
If your domain is making money then it's worth the money; if you're not making any money, do you really need the cert?
Interesting. I have never observed this behavior on any site until I saw this question. Even Google has this problem; the URL below gives the bad cert error:
https://google.com/accounts/
Btw, most sites have a subdomain that they protect with a certificate. One vote up for the question.
In Apache this is usually done with mod_rewrite:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
Google for "rewrite URL IIS", you'll find some equivalents for IIS.
