How to set exceptions for NGINX load balancer

Is it possible to configure an NGINX load balancer in least_conn mode to make an exception for certain paths?
I want to configure the load balancer so that all requests belonging to a single login operation are sent to the same backend instance.
I have a frontend app accessing a duplicated backend app via an nginx load balancer. All apps are deployed on Tomcat 8.5, and the backend instances have session replication configured between the Tomcats.
My problem is that when a user is authenticated using the OAuth 2.0 authorization_code grant, the frontend app receives an authorization code, but because it connects to the backend through the load balancer, it tries to redeem the code for a token on a different machine, resulting in an InvalidGrantException.
Using ip_hash mode or its variations is not a solution to this problem, as it is unstable when the application is accessed through a VPN.

Yes, you can achieve what you want by declaring two locations and treating them differently. See the example below, and check this question, which explains how location priority works.
http {
    upstream myapp1 {
        least_conn;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }

        location /my-special-path/ {
            proxy_pass http://srv1.example.com;
        }
    }
}
The above is mainly based on your first statement, that you want routing based on certain paths. If your problem is more complicated, e.g. these paths are created dynamically, please share an example so it is easier to understand your specific situation.
UPDATE
Based on the comment: I would really suggest troubleshooting your backend so the instances stay in sync. That said, if you really want to solve this exact problem from nginx, I would do the following:
On every response, add a header indicating which backend answered the request: add_header X-Upstream $upstream_addr;
On the special path, route the request based on the value of that header: proxy_pass http://$http_x_upstream;
So the config would look like this:
http {
    ...
    server {
        ...
        location / {
            add_header X-Upstream $upstream_addr always;
            proxy_pass http://myapp1;
        }

        location /authorize/ {
            add_header X-Upstream $upstream_addr always;
            proxy_pass http://$http_x_upstream;
        }
    }
}
NOTE on security: if you go down this path, be aware that you are routing requests based on a value the client can manipulate, so make sure you at least validate it. Check this answer for validating headers with nginx.
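A minimal sketch of such validation, assuming the three backends from the earlier example (the map goes in the http block; note that $upstream_addr is usually reported as an IP:port pair, so match whatever form your setup actually emits):

```nginx
# Whitelist the header value; anything unrecognised falls back
# to the load-balanced pool instead of being proxied blindly.
map $http_x_upstream $validated_upstream {
    default            myapp1;
    "srv1.example.com" $http_x_upstream;
    "srv2.example.com" $http_x_upstream;
    "srv3.example.com" $http_x_upstream;
}

server {
    location /authorize/ {
        proxy_pass http://$validated_upstream;
    }
}
```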

Related

nginx reverse proxy all request when location match

I am trying to use nginx to host 2 different web apps on the same domain (e.g. www.myapp.com). I want to detect a specific location and then load all resources from the corresponding app. Previously I had proxied /api to a specific upstream, but the requirements here are a little more complicated due to the presence of other resources that need to be loaded.
server {
    listen 80;

    location / {
        proxy_pass http://app1;
    }

    location /app2/ {
        proxy_pass http://app2;
    }
}

upstream app1 {
    server 127.0.0.1:8080;
}

upstream app2 {
    server 127.0.0.1:3000;
}
When the URL hits myapp.com/app2, it loads the HTML from app2, but other resources (js, css, translations, analytics) are requested from app1, which obviously results in 404 NOT FOUND. Adding a location rule to match all css, js, etc. does not make sense, because both apps have their own css, js and other resource files. Adding a rule per hardcoded resource is also taxing, because the set of files changes with every build.
The 2 main requirements are:
Load all resources from app2 when the URL is www.myapp.com/app2, including all js, css and other resources. For instance, the js and css files are served at /static/js/*.js and /static/css/*.css, and they are being requested from https://app1/static/js/*.js rather than from app2. There could be additional resources in the future, so I want to remain flexible.
I don't want to rewrite the URL; for end users it should always remain www.myapp.com/*, rather than using any subdomains. When using subdomains, it works as expected.
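One common approach to the requirements above (a sketch, not from the question: it assumes app2's build tool lets you set a base/public path, e.g. PUBLIC_URL=/app2 for Create React App; the variable name depends on the toolchain) is to build app2 so its HTML references assets under /app2/. Then a single prefix location covers every current and future asset:

```nginx
# If app2's pages request /app2/static/js/*.js etc., this one
# prefix location routes all of them, with no per-file rules.
location /app2/ {
    proxy_pass http://app2;
}
```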

nginx proxy_pass to location returns 401 error

I have a WebDAV server set up, with its root folder /SomeVolume/webdav/contents and address 192.168.1.2:12345. User and password are set and the server can be accessed from a browser.
I am directing a domain name to the same machine using nginx, like this:
server {
    server_name my.domain.me;

    location / {
        proxy_pass http://192.168.1.2:12345;
    }

    # plus the usual Certbot SSL stuff
This was working perfectly well, with HTTPS authentication and everything. I am using a third-party application that uses that server and it was working OK too.
I wanted to make this a bit more tidy and only changed couple of things:
WebDav server root to /SomeVolume/webdav (instead of /SomeVolume/webdav/contents), restarted the server.
proxy_pass http://192.168.1.2:12345 changed to proxy_pass http://192.168.1.2:12345/contents. Restarted nginx.
Nothing else was modified.
I can still login through the browser, but the third-party application has stopped working because it gets authentication errors (401). Although if I try to login locally with http://192.168.1.2:12345/contents/ it works just fine.
What am I not understanding here? Is it some caching problem with the third-party application or have I misunderstood how location & proxy_pass work?
Thanks.
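One likely culprit (an assumption, since only the config excerpt is shown): with location / and a proxy_pass URI that lacks a trailing slash, nginx replaces the matched prefix literally, so a request for /foo is proxied as /contentsfoo rather than /contents/foo. Adding the trailing slash fixes the mapping:

```nginx
location / {
    # /foo is now forwarded as /contents/foo
    proxy_pass http://192.168.1.2:12345/contents/;
}
```

The browser test at the root URL can still appear to work while a WebDAV client issuing deeper paths fails.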

How to use fail2ban with Meteor on Nginx

I'm setting up a Digital Ocean droplet running Ubuntu 18.04 to host my Meteor 1.8 app via Phusion Passenger / Nginx. I will configure it to use SSL with Let's Encrypt.
fail2ban is a recommended tool to protect against brute force attacks, but I can't work out how to use it with Meteor, or even if it's appropriate. I've read several tutorials but there is something basic I don't understand.
I have used server location blocks in my Nginx config file to block access to all urls by default and only allow the necessary ones:
# deny all paths by default
location / { deny all; }

# allow sockjs
location /sockjs { }

# allow required paths
location = / { }
location /my-documents { }
location /login { }
location /register { }
...

# serve js and css
location ~* "^/[a-z0-9]{40}\.(css|js)$" {
    root /var/www/myapp/bundle/programs/web.browser;
    access_log off;
    expires max;
}

# serve public folder
location ~* \.(jpg|jpeg|png|gif|mp3|ico|pdf|svg)$ {
    root /var/www/myapp/bundle/public;
    access_log off;
    expires max;
}

# deny unwanted requests
location ~* (\.php|\.aspx|\.asp|myadmin) {
    return 404;
}
My basic question is: would fail2ban detect failed attempts to login to my Meteor app, and if so, how? If not, then what's the purpose of it? Is it looking for failed attempts to login to the server itself? I have disabled password access on the droplet - you can only connect to the server via ssh.
And how does this relate to Nginx password protection of sections of the site? Again, what's this for and do I need it? How would it work with a Meteor app?
Thank you for any help.
Any modern single page application using React/Vue/Blaze as its rendering engine simply doesn't send url requests to the server for each page in the UI.
Meteor loads all its assets at the initial page load, and the rest is done over sockets using DDP. It might load static assets as separate requests.
Any server API calls implemented as Meteor methods also won't show up in server logs.
So fail2ban will detect some brute force attacks, and could therefore be useful in blocking those attacks and preventing them from swamping the server, but it won't detect failed login attempts.
You could adapt the application to detect failed logins, and call the fail2ban API to log them (if that is possible). Otherwise I'm not sure whether it is totally appropriate for protecting a meteor server.
My conclusion is that yes, fail2ban is worth using with Meteor. As far as I can tell, Nginx password protection isn't relevant, but there's other good stuff you can do.
Firstly, I think it's worth using fail2ban on any server to block brute force attacks. My test server has been online only a couple of days with no links pointing to it and already I'm seeing probes to paths like wp-admin and robots.txt in the Nginx logs. These probes can't achieve anything because the files don't exist, but I think it's safer to ban repeated calls.
I worked from this tutorial to set up a jail for forbidden urls, modifying the jail definition to point to my actual Nginx log file.
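For reference, such a forbidden-URL filter typically looks like this (a sketch with assumed file name and patterns, not copied from the tutorial; adapt the regex to the probes you actually see in your access log):

```ini
# /etc/fail2ban/filter.d/nginx-forbidden.conf (hypothetical name)
[Definition]
failregex = ^<HOST> -.*"(GET|POST) .*(wp-admin|wp-login|xmlrpc\.php).*" (403|404)
ignoreregex =
```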
Then, I've modified my app to record failed login attempts and written a custom jail and filter to block these. It may be that nobody will bother to write a script to attack a Meteor site specifically, and my Meteor app has throttling on the logins, but again I feel it's better to be more careful than less.
Here's how I've modified my app:
server/main.js
const buildServerLogText = (text) => {
    const connection = Meteor.call('auth.getClientConnection');
    return `${moment(new Date()).format('YYYY/MM/DD HH:mm:ss')} ${text}, client: ${connection.clientAddress}, host: "${connection.httpHeaders.host}"`;
};

// log failed login attempts so fail2ban can find them in the Nginx logs
Accounts.onLoginFailure(() => {
    const text = buildServerLogText('[error]: Meteor login failure');
    console.log(text);
});
This writes failed login attempts to the server in this form:
2020/03/10 15:40:20 [error]: Meteor login failure, client: 86.180.254.102, host: "209.97.135.5"
The date format is important; fail2ban is fussy about it.
I also had to set passenger_disable_log_prefix on; in my Phusion Passenger config file to stop a prefix being added to the log entry. As I'm deploying my app with Phusion Passenger, the Nginx config is in the Passenger config file.
Then my fail2ban filter is like this:
/etc/fail2ban/filter.d/nginx-login-failure.conf
[Definition]
failregex = ^ \[error\]:.*Meteor login failure.*, client: <HOST>, .*$
ignoreregex =
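To activate the filter, a matching jail is needed. A sketch (the thresholds are assumptions, and logpath must point to wherever Passenger writes the nginx log on your system):

```ini
# /etc/fail2ban/jail.local
[nginx-login-failure]
enabled  = true
port     = http,https
filter   = nginx-login-failure
logpath  = /var/log/nginx/error.log
maxretry = 5
findtime = 600
bantime  = 3600
```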

nginx proxy all request through authentication service

Consider a dockerized environment containing the following containers:
Backend API
Front-end REACT App served using pushstate-server
Authentication Service
Nginx Container
My nginx.conf contains the following:
server {
    listen 8080;

    location / {
        auth_request /auth;
        proxy_pass http://frontend:5000;
    }

    location = /auth {
        proxy_pass http://auth:6000;
    }

    error_page 403 = @error403;

    location @error403 {
        rewrite ^ /login$1;
        proxy_pass http://frontend:5000;
    }
}
When the auth_request /auth; line is commented out, everything works just fine and all frontend pages can be accessed.
As soon as I introduce the auth_request I can see the authentication service return a 403 however, it does not look like Nginx proxies to the login page.
What am I doing wrong?
There are two issues here:
Firstly, the Authorization header was not being forwarded to the authentication service. This was fixed with:
location = /auth {
    proxy_pass http://auth:6000;
    proxy_pass_header Authorization;
}
Secondly, when a request is made to the frontend, nginx tries to authenticate with the auth container. As I am not authenticated, this fails and returns a 403. nginx then proxies to the login page on the REACT container; however, there are further requests behind the scenes to retrieve css and js resources from the same container, for which the nginx gateway also tries to authenticate. Again, as I am not authenticated, retrieving these resources fails, so the page does not render.
A dirty solution was to add:
location /static/js/main.1e2389bc.js {
    proxy_pass http://web:5000;
}

location /static/css/main.aa587518.css {
    proxy_pass http://web:5000;
}
This retrieves the files needed to render the login page without trying to authenticate. It is a bad solution, as there may be other resources (favicon, other media etc.), so more blocks would need to be added. I am sure there is a simple solution using regex to sort this out.
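For the record, one such regex location (a sketch; the extension list is an assumption) would replace the per-file blocks:

```nginx
# Serve any static asset from the frontend without auth_request;
# new build hashes and new file types need no extra rules.
location ~* ^/static/.*\.(js|css|map|png|jpg|svg|ico|woff2?)$ {
    proxy_pass http://web:5000;
}
```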
However, I ended up with a cleaner solution. Authenticate on requests to the backend API. This ensured that no sensitive information was displayed on the frontend without being authenticated and removed the hassle of hacking a solution to render the login page.

one sub-domain name for multiple services

I am using an nginx proxy server and I don't want to use another sub-domain.
Is it possible to redirect one domain name to multiple servers?
E.g. my registered domain name is user.example.com,
and my app servers are 192.168.0.1:7000 and 192.168.0.2:8000.
What I am looking to do is: when I hit user.example.com it goes to 192.168.0.1:7000, and when I hit user.example.com/1 it goes to 192.168.0.2:8000.
Yes, such a configuration can be implemented using NGINX as a reverse proxy; it is described very well in the official documentation, e.g. in the "NGINX reverse proxy" guide. Basically it is just:
location /some/path/ {
    proxy_pass http://192.168.0.1:7000;
}

location /another/path/ {
    proxy_pass http://192.168.0.2:8000;
}
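Applied to the question's addresses, it might look like this (a sketch; note that proxy_pass requires a scheme such as http://, and that the trailing slash on the second proxy_pass strips the /1/ prefix before forwarding):

```nginx
server {
    listen 80;
    server_name user.example.com;

    # user.example.com/*   -> first app server
    location / {
        proxy_pass http://192.168.0.1:7000;
    }

    # user.example.com/1/* -> second app server, /1/ stripped
    location /1/ {
        proxy_pass http://192.168.0.2:8000/;
    }
}
```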
