How to use fail2ban with Meteor on Nginx

I'm setting up a DigitalOcean droplet running Ubuntu 18.04 to host my Meteor 1.8 app via Phusion Passenger / Nginx. I will configure it to use SSL with Let's Encrypt.
fail2ban is a recommended tool to protect against brute force attacks, but I can't work out how to use it with Meteor, or even if it's appropriate. I've read several tutorials but there is something basic I don't understand.
I have used location blocks in my Nginx config file to block access to all URLs by default and only allow the necessary ones:
# deny all paths by default
location / { deny all; }
# allow sockjs
location /sockjs { }
# allow required paths
location = / { }
location /my-documents { }
location /login { }
location /register { }
...
# serve js and css
location ~* "^/[a-z0-9]{40}\.(css|js)$" {
root /var/www/myapp/bundle/programs/web.browser;
access_log off;
expires max;
}
# serve public folder
location ~ \.(jpg|jpeg|png|gif|mp3|ico|pdf|svg)$ {
    root /var/www/myapp/bundle/public;
    access_log off;
    expires max;
}
# deny unwanted requests
location ~ (\.php|\.aspx|\.asp|myadmin) {
    return 404;
}
My basic question is: would fail2ban detect failed attempts to log in to my Meteor app, and if so, how? If not, then what's the purpose of it? Is it looking for failed attempts to log in to the server itself? I have disabled password access on the droplet - you can only connect to the server via SSH.
And how does this relate to Nginx password protection of sections of the site? Again, what's this for and do I need it? How would it work with a Meteor app?
Thank you for any help.

Any modern single-page application using React/Vue/Blaze as its rendering engine simply doesn't send URL requests to the server for each page in the UI.
Meteor loads all its assets at the initial page load, and the rest is done over sockets using DDP. It might load static assets as separate requests.
Any server API calls implemented as Meteor methods also travel over DDP, so they won't show up in the web server's logs as distinct requests.
So fail2ban will detect some brute force attacks, and could therefore be useful in blocking those attacks and preventing them from swamping the server, but it won't detect failed login attempts.
You could adapt the application to detect failed logins and call the fail2ban API to log them (if that is possible). Otherwise I'm not sure whether it is totally appropriate for protecting a Meteor server.

My conclusion is that yes, fail2ban is worth using with Meteor. As far as I can tell, Nginx password protection isn't relevant, but there's other good stuff you can do.
Firstly, I think it's worth using fail2ban on any server to block brute force attacks. My test server has been online only a couple of days with no links pointing to it and already I'm seeing probes to paths like wp-admin and robots.txt in the Nginx logs. These probes can't achieve anything because the files don't exist, but I think it's safer to ban repeated calls.
I worked from this tutorial to set up a jail for forbidden urls, modifying the jail definition to point to my actual Nginx log file.
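For reference, a jail of that kind lives in /etc/fail2ban/jail.local and looks roughly like this (the jail/filter name, log path and limits below are illustrative and must match your own setup):
/etc/fail2ban/jail.local
[nginx-forbidden]
enabled  = true
port     = http,https
filter   = nginx-forbidden
# point this at the log file your Nginx actually writes
logpath  = /var/log/nginx/access.log
maxretry = 3
findtime = 600
bantime  = 3600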
Then, I've modified my app to record failed login attempts and written a custom jail and filter to block these. It may be that nobody will bother to write a script to attack a Meteor site specifically, and my Meteor app has throttling on the logins, but again I feel it's better to be more careful than less.
Here's how I've modified my app:
server/main.js
import { Meteor } from 'meteor/meteor';
import { Accounts } from 'meteor/accounts-base';
import moment from 'moment';

// build a log line in the Nginx error-log format that fail2ban expects;
// 'auth.getClientConnection' is my own method returning the client's connection info
const buildServerLogText = (text) => {
  const connection = Meteor.call('auth.getClientConnection');
  return `${moment(new Date()).format('YYYY/MM/DD HH:mm:ss')} ${text}, client: ${connection.clientAddress}, host: "${connection.httpHeaders.host}"`;
};
// log failed login attempts so fail2ban can find them in the Nginx logs
Accounts.onLoginFailure(() => {
  const text = buildServerLogText('[error]: Meteor login failure');
  console.log(text);
});
This writes failed login attempts to the server log in this form:
2020/03/10 15:40:20 [error]: Meteor login failure, client: 86.180.254.102, host: "209.97.135.5"
The date format is important; fail2ban is fussy about this.
I also had to set passenger_disable_log_prefix on; in my Phusion Passenger config file to stop a prefix being added to the log entry. As I'm deploying my app with Phusion Passenger, the Nginx config is in the Passenger config file.
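For orientation, the relevant part of that Passenger/Nginx server block looks something like this (server name and paths are illustrative, and most of the app config is omitted):
/etc/nginx/sites-enabled/myapp.conf
server {
    listen 443 ssl;
    server_name myapp.example.com;
    root /var/www/myapp/bundle/public;
    passenger_enabled on;
    # stop Passenger prepending its "App ... output:" prefix to stdout lines,
    # so fail2ban can anchor its regex on the timestamp
    passenger_disable_log_prefix on;
}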
Then my fail2ban filter is like this:
/etc/fail2ban/filter.d/nginx-login-failure.conf
[Definition]
failregex = ^ \[error\]:.*Meteor login failure.*, client: <HOST>, .*$
ignoreregex =
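And the matching jail, again in /etc/fail2ban/jail.local (limits are illustrative; the logpath must be the Nginx error log that Passenger forwards the app's stdout into):
/etc/fail2ban/jail.local
[nginx-login-failure]
enabled  = true
port     = http,https
filter   = nginx-login-failure
logpath  = /var/log/nginx/error.log
maxretry = 5
findtime = 300
bantime  = 3600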

Related

nginx proxy_pass to location returns 401 error

I have a WebDAV server set up, with its root folder /SomeVolume/webdav/contents and address 192.168.1.2:12345. User and password are set and the server can be accessed from a browser.
I am directing a domain name to the same machine using nginx, like this:
server {
    server_name my.domain.me;
    location / {
        proxy_pass http://192.168.1.2:12345;
    }
    # plus the usual Certbot SSL stuff
}
This was working perfectly well, with HTTPS authentication and everything. I am using a third-party application that uses that server and it was working OK too.
I wanted to make this a bit more tidy and only changed a couple of things:
WebDAV server root to /SomeVolume/webdav (instead of /SomeVolume/webdav/contents), restarted the server.
proxy_pass http://192.168.1.2:12345 changed to proxy_pass http://192.168.1.2:12345/contents. Restarted nginx.
Nothing else was modified.
I can still log in through the browser, but the third-party application has stopped working because it gets authentication errors (401). Although if I try to log in locally with http://192.168.1.2:12345/contents/ it works just fine.
What am I not understanding here? Is it some caching problem with the third-party application or have I misunderstood how location & proxy_pass work?
Thanks.
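One thing worth checking, as a sketch of nginx's documented behaviour rather than the asker's exact config: with a prefix location, nginx replaces the matched part of the request URI with the URI given in proxy_pass, so the trailing slash matters:
server {
    server_name my.domain.me;
    location / {
        # /foo is forwarded as /contents/foo
        proxy_pass http://192.168.1.2:12345/contents/;
        # with /contents (no trailing slash), /foo would become /contentsfoo
    }
}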

How to set exceptions for NGINX load balancer

Is it possible to configure an NGINX load balancer in least_conn mode to make exceptions for certain paths?
I want to configure the load balancer in such a way that all requests required for a single login operation are sent to the same backend application instance.
I have a frontend app accessing a duplicated backend app via an nginx load balancer. All apps are deployed on Tomcat 8.5, and the backend instances have session replication configured between the Tomcats.
My problem is that when a user is authenticated using the OAuth 2.0 authorization_code grant method, the frontend app gets an authorization code, but because it connects to the backend through the load balancer, it tries to obtain a token using this code from another machine, resulting in an InvalidGrantException.
Using ip_hash mode or its variations isn't a solution to this problem, as it is unstable when the application is accessed through a VPN.
Yes, you can achieve what you want by declaring two locations and treating them differently. See the example below, and check this question, which explains how the priority works.
http {
    upstream myapp1 {
        least_conn;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
        location /my-special-path/ {
            proxy_pass http://srv1.example.com;
        }
    }
}
The above is a solution based mainly on your first statement that you want routing based on certain paths. If your problem is more complicated, i.e. these paths are created dynamically etc., you can share an example to make your specific situation easier to understand.
UPDATE
Based on the comment: I would really suggest troubleshooting your backends so that they stay in sync. That being said, if you really want a solution to this exact problem from your nginx, I would do the following:
On every response, add a header recording which specific backend answered the request: add_header X-Upstream $upstream_addr;
On this specific path, serve the request based on the value of that header: proxy_pass http://$http_x_upstream;
So the config would look like this:
http {
    ...
    server {
        ...
        location / {
            add_header X-Upstream $upstream_addr always;
            proxy_pass http://myapp1;
        }
        location /authorize/ {
            add_header X-Upstream $upstream_addr always;
            proxy_pass http://$http_x_upstream;
        }
    }
}
NOTE: Security. If you go down this path, be careful: you are routing your requests based on a value that your client can manipulate, so be sure that you are at least validating this value. Check this answer for validating headers with nginx.
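As a sketch of that validation (the backend addresses below are illustrative): an nginx map can allow-list the client-supplied header and reject everything else:
http {
    # map the client-supplied header to a vetted value; unknown values become ""
    map $http_x_upstream $safe_upstream {
        default         "";
        "10.0.0.1:8080" $http_x_upstream;
        "10.0.0.2:8080" $http_x_upstream;
    }
    server {
        location /authorize/ {
            # reject requests carrying an unrecognised upstream header
            if ($safe_upstream = "") { return 400; }
            proxy_pass http://$safe_upstream;
        }
    }
}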

Disable symfony application during maintenance

I'm looking for a way to disable a Symfony application during maintenance. I mean, in a very simple way:
1) I have an application where people can enter to see the info in the database.
2) The admin can change the info in the database. During this period, the database info should not be accessible, because it is being deleted and updated.
3) I want to know whether there is any way to block the application during this maintenance period and redirect users (but not the admin user) to a maintenance notice page.
I remember there was a global function which redirects all URLs, but I don't remember it very well.
During the maintenance period I could set a param in the database (or in any other way), and check this value to know whether the application is in a maintenance period, so as to either serve the normal URL or redirect to the maintenance notice page.
If you store a param in the database to record when the admin is updating data, then it's fairly simple to use a kernel request listener that:
tests the database value;
checks whether the admin is the current user.
See here: Events and event listeners. A sketch follows below.
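A minimal sketch of such a listener (class names and the MaintenanceFlagRepository service are hypothetical; assumes PHP 8 and a recent Symfony with the security component):
src/EventListener/MaintenanceListener.php
<?php

namespace App\EventListener;

use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpKernel\Event\RequestEvent;
use Symfony\Component\Security\Core\Security;

class MaintenanceListener
{
    public function __construct(
        private Security $security,
        private MaintenanceFlagRepository $flags // hypothetical service reading the DB param
    ) {}

    // register with the kernel.event_listener tag, event: kernel.request
    public function onKernelRequest(RequestEvent $event): void
    {
        if (!$event->isMainRequest()) {
            return;
        }
        // don't intercept the notice page itself, or we'd redirect in a loop
        if ($event->getRequest()->getPathInfo() === '/maintenance') {
            return;
        }
        // block everyone except the admin while maintenance is on
        if ($this->flags->isMaintenanceOn() && !$this->security->isGranted('ROLE_ADMIN')) {
            $event->setResponse(new RedirectResponse('/maintenance'));
        }
    }
}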
You can do this without changing any of the project code, directly in the webserver config. The following works on NGINX (but use with caution, because "if is evil"), and it should be no problem to reproduce it for Apache.
location / {
    # if the user is not you / an admin
    if ($remote_addr != "your.ip.address") {
        # return HTTP code 503 (the server is currently unavailable,
        # because it is overloaded or down for maintenance)
        return 503;
    }
    # otherwise, go ahead as usual
    try_files $uri $uri/ /app.php?$is_args$args;
}
# show a specific page for HTTP code 503
error_page 503 @maintenance;
location @maintenance {
    # which is maintenance.html
    rewrite ^(.*)$ /maintenance.html break;
}
IP check was a quick solution in my case. You could also check for specific cookies, etc. The idea stays the same: the application is independent of this.

Certbot /.well-known/acme-challenge

Should I leave the /.well-known/acme-challenge always exposed on the server?
Here is my config for the HTTP:
server {
    listen 80;
    location '/.well-known/acme-challenge' {
        root /var/www/demo;
    }
    location / {
        if ($scheme = http) {
            return 301 https://$server_name$request_uri;
        }
    }
}
This basically redirects all requests to HTTPS, except for the acme-challenge (for auto-renewal). My question: is it all right to keep location '/.well-known/acme-challenge' always exposed on port 80? Or is it better to comment/uncomment it manually when I need to reissue the certificate? Are there any security issues with that?
Any advice or links to read about this location would be appreciated. Thanks!
The ACME challenge location is only needed for verifying that the domain points to this IP address.
You do not need to keep the token available once your certificate has been signed. However, there is not much harm in leaving it available either, as explained by a Certbot engineer:
The token is part of a particular challenge which is no longer active, from the ACME server's point of view, after the server has tried to validate it. It would reveal a little bit of information about how you get certificates, but should not allow someone else to issue certificates for your site or impersonate you.
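If you do leave the location in place, Certbot's standard dry-run mode lets you confirm that renewal still works without issuing anything:
sudo certbot renew --dry-run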
In case someone finds this helpful, I just asked my hosting customer support and they explained it as follows:
Yes, the “well-known” folder is automatically created by cPanel in order to validate your domain for AutoSSL purposes. AutoSSL is an added feature of cPanel/WHM which offers you a free SSL certificate for your domains; it's also known as a self-signed SSL certificate. The .well-known folder is created at the time of the domain validation process as part of the AutoSSL installation.
And it is not a file that needs to be removed; it does not cause any issue.
The period before the file name (.well-known) means it is a hidden directory. If your server gets hacked, the information is available to the hacker.

Secure remote sqlbuddy in Nginx

I've added sqlbuddy on my nginx server for remote management of my DB. To that I've added .htaccess-style password protection. However, if I click Cancel in the authentication prompt window, I can still access the login for sqlbuddy. I can log in and see a few parts of the UI. If I view the browser source I can see more data. How do I stop this? What's the best setup for this in nginx?
This is the nginx conf:
location /sqlbuddy {
    auth_basic "Administrator Login";
    auth_basic_user_file /opt/nginx/html/sqlbuddy/.htpasswd;
}
