Varnish probes and logs - nginx

I have a LNMP stack with Varnish in front. Varnish runs a probe that checks every second whether the site is up.
It works well, but I don't want those probe requests logged.
Does anyone know how to disable logging for just those requests?
Thanks

In your nginx.conf, put the following inside the http { ... } block:
map "$request_method:$request_uri:$remote_addr" $loggable {
    "HEAD:/:127.0.0.1" 0;
    default 1;
}
Find your access_log directive and add the if condition to it like so:
access_log /path/to/access.log combined if=$loggable;
What this does is log requests conditionally: a HEAD request to / made from localhost will not be logged; everything else is logged as usual.
Naturally, you will have to adjust "HEAD:/:127.0.0.1" if your probe uses a different request method or resource, or if Varnish is not on the same machine; e.g. "GET:/healthcheck:1.2.3.4" suppresses logging of GET requests to /healthcheck from 1.2.3.4.
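Extending the idea, a sketch of a map that silences more than one probe at once (the second entry's method, path, and address are the illustrative values from above, not a real probe):

```nginx
map "$request_method:$request_uri:$remote_addr" $loggable {
    "HEAD:/:127.0.0.1"         0;  # local Varnish probe
    "GET:/healthcheck:1.2.3.4" 0;  # hypothetical remote probe
    default                    1;
}
```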

Nginx: How do I pass Origin header through proxy?

Currently I have a service at https://example.com that, as part of its standard logging setup, logs the request's Origin header. This is a public data API, open to any and every origin.
This service used to be at https://example_2.com, but I proxy that address to the new one to ensure non-breaking service for existing users. This is done in the following way:
server {
    ...
    server_name example_2.com;

    location / {
        proxy_pass https://example.com;
        add_header Access-Control-Allow-Origin *;
    }
}
The problem is that the Origin header turns up as null at the proxy destination. I need the header to arrive intact so I can know where the request came from.
I tried adding proxy_pass_request_headers but that seemingly does nothing at all.
While I haven't fixed passing Origin through, I did learn that Nginx automatically populates the Referer header, which works for my purpose. It's not ideal, and I'd still love to know how to get a non-null Origin, but I'm posting this in case it helps others who can get away with the same workaround.
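One thing worth trying (a sketch, not a confirmed fix: nginx normally forwards the client's request headers apart from a few like Host and Connection, so if Origin arrives null the client may simply not be sending it) is forwarding the header explicitly:

```nginx
location / {
    proxy_pass https://example.com;
    # pass the client's Origin through explicitly; $http_origin is
    # empty when the browser sent no Origin header at all
    proxy_set_header Origin $http_origin;
    add_header Access-Control-Allow-Origin *;
}
```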

How to use fail2ban with Meteor on Nginx

I'm setting up a Digital Ocean droplet running Ubuntu 18.04 to host my Meteor 1.8 app via Phusion Passenger / Nginx. I will configure it to use SSL with Let's Encrypt.
fail2ban is a recommended tool to protect against brute-force attacks, but I can't work out how to use it with Meteor, or even whether it's appropriate. I've read several tutorials, but there is something basic I don't understand.
I have used location blocks in my Nginx config file to block access to all URLs by default and only allow the necessary ones:
# deny all paths by default
location / { deny all; }

# allow sockjs
location /sockjs { }

# allow required paths
location = / { }
location /my-documents { }
location /login { }
location /register { }
...

# serve js and css
location ~* "^/[a-z0-9]{40}\.(css|js)$" {
    root /var/www/myapp/bundle/programs/web.browser;
    access_log off;
    expires max;
}

# serve public folder
location ~ \.(jpg|jpeg|png|gif|mp3|ico|pdf|svg) {
    root /var/www/myapp/bundle/public;
    access_log off;
    expires max;
}

# deny unwanted requests
location ~ (\.php|\.aspx|\.asp|myadmin) {
    return 404;
}
My basic question is: would fail2ban detect failed attempts to log in to my Meteor app, and if so, how? If not, then what's its purpose? Is it looking for failed attempts to log in to the server itself? I have disabled password access on the droplet - you can only connect to the server via SSH.
And how does this relate to Nginx password protection for sections of the site? Again, what's that for, and do I need it? How would it work with a Meteor app?
Thank you for any help.
Any modern single-page application using React/Vue/Blaze as its rendering engine simply doesn't send a URL request to the server for each page in the UI.
Meteor loads all its assets at the initial page load, and the rest is done over sockets using DDP, though it may load static assets as separate requests.
Any server API calls implemented as Meteor methods also won't show up in the server logs.
So fail2ban will detect some brute force attacks, and could therefore be useful in blocking those attacks and preventing them from swamping the server, but it won't detect failed login attempts.
You could adapt the application to detect failed logins, and call the fail2ban API to log them (if that is possible). Otherwise I'm not sure whether it is totally appropriate for protecting a meteor server.
My conclusion is that yes, fail2ban is worth using with Meteor. As far as I can tell, Nginx password protection isn't relevant, but there's other good stuff you can do.
Firstly, I think it's worth using fail2ban on any server to block brute force attacks. My test server has been online only a couple of days with no links pointing to it and already I'm seeing probes to paths like wp-admin and robots.txt in the Nginx logs. These probes can't achieve anything because the files don't exist, but I think it's safer to ban repeated calls.
I worked from this tutorial to set up a jail for forbidden urls, modifying the jail definition to point to my actual Nginx log file.
Then, I've modified my app to record failed login attempts and written a custom jail and filter to block these. It may be that nobody will bother to write a script to attack a Meteor site specifically, and my Meteor app has throttling on the logins, but again I feel it's better to be more careful than less.
Here's how I've modified my app:
server/main.js
import moment from 'moment';
import { Meteor } from 'meteor/meteor';
import { Accounts } from 'meteor/accounts-base';

const buildServerLogText = (text) => {
  const connection = Meteor.call('auth.getClientConnection');
  return `${moment(new Date()).format('YYYY/MM/DD HH:mm:ss')} ${text}, client: ${connection.clientAddress}, host: "${connection.httpHeaders.host}"`;
};

// log failed login attempts so fail2ban can find them in the Nginx logs
Accounts.onLoginFailure(() => {
  const text = buildServerLogText('[error]: Meteor login failure');
  console.log(text);
});
This writes failed login attempts to the server in this form:
2020/03/10 15:40:20 [error]: Meteor login failure, client: 86.180.254.102, host: "209.97.135.5"
The date format is important; fail2ban is fussy about this.
I also had to set passenger_disable_log_prefix on; in my Phusion Passenger config file to stop a prefix being added to the log entry. As I'm deploying my app with Phusion Passenger, the Nginx config is in the Passenger config file.
Then my fail2ban filter is like this:
/etc/fail2ban/filter.d/nginx-login-failure.conf
[Definition]
failregex = ^ \[error\]:.*Meteor login failure.*, client: <HOST>, .*$
ignoreregex =
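To sanity-check the filter before deploying it, the matching can be simulated outside fail2ban. This is a rough sketch: real fail2ban substitutes <HOST> with its own, more elaborate host pattern and strips the timestamp via its datepattern, and simplified stand-ins for both are used here:

```python
import re

# a log line in the format the app above writes
SAMPLE = ('2020/03/10 15:40:20 [error]: Meteor login failure, '
          'client: 86.180.254.102, host: "209.97.135.5"')

# simplified stand-in for fail2ban's datepattern
DATE = r'^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}'
# the failregex above, with <HOST> replaced by a simplified host pattern
FAILREGEX = r'^ \[error\]:.*Meteor login failure.*, client: (?P<host>\S+), .*$'

def extract_host(line):
    # fail2ban first consumes the timestamp, then applies failregex to the rest
    m = re.match(DATE, line)
    if not m:
        return None
    hit = re.match(FAILREGEX, line[m.end():])
    return hit.group('host') if hit else None

print(extract_host(SAMPLE))  # → 86.180.254.102
```

If this prints the client address, the filter should pick the line up; a non-matching line returns None.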

How to set exceptions for NGINX load balancer

Is it possible to configure an NGINX load balancer in least_conn mode to make exceptions for certain paths?
I want to configure the load balancer in such a way that all requests required for a single login operation are sent to the same backend application instance.
I have a frontend app accessing a duplicated backend app via an nginx load balancer. All apps are deployed on Tomcat 8.5, and the backend instances have session replication configured between the Tomcats.
My problem is that when a user is authenticated using the OAuth 2.0 authorization_code grant, the frontend app gets an authorization code, but because it connects to the backend through the load balancer, it may try to obtain a token using this code from another machine, resulting in an InvalidGrantException.
Using ip_hash mode or its variations isn't a solution to this problem, as it is unstable when the application is accessed through a VPN.
Yes, you can achieve what you want by declaring two locations and treating them differently. See the example below, and check this question, which explains how location priority works.
http {
    upstream myapp1 {
        least_conn;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }

        location /my-special-path/ {
            proxy_pass http://srv1.example.com;
        }
    }
}
The above is a solution based mainly on your first statement, that you want routing based on certain paths. If your problem is more complicated, i.e. these paths are created dynamically etc., you can share an example to make your specific situation easier to understand.
UPDATE
Based on the comment: I would really suggest troubleshooting your backends so that they stay in sync. That said, if you really want a solution to this exact problem from nginx, I would do the following:
On every response, add a header recording which specific backend answered the request: add_header X-Upstream $upstream_addr;
On this specific path, serve the request based on the value of that header: proxy_pass http://$http_x_upstream;
So the config would look like this:
http {
    ...
    server {
        ...
        location / {
            add_header X-Upstream $upstream_addr always;
            proxy_pass http://myapp1;
        }

        location /authorize/ {
            add_header X-Upstream $upstream_addr always;
            proxy_pass http://$http_x_upstream;
        }
    }
}
NOTE: Security. If you go down this path, be careful: you are routing your requests based on a value that your client can manipulate, so be sure you at least validate this value. Check this answer for validating headers with nginx.
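As an alternative sketch (not from the original answer): since the backends are Tomcats, session affinity can also be approximated in open-source nginx by hashing on the session cookie. This assumes the default Tomcat cookie name JSESSIONID:

```nginx
upstream myapp1 {
    # consistent hashing on Tomcat's session cookie keeps a logged-in
    # client on one backend; clients without the cookie hash on ""
    hash $cookie_jsessionid consistent;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
```

Unlike the header-based approach above, the client cannot steer requests to an arbitrary upstream this way, though requests without the cookie all land on the same backend.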

how to write dynamic route in nginx location proxy config

For example, I want anyone who hits localhost/api to be proxied to 127.0.0.1/api plus whatever follows it: if I get localhost/api/getMyName, the config should send it to 127.0.0.1/api/getMyName, and if someone hits localhost/api/getSomeone/1, it should proxy to 127.0.0.1/api/getSomeone/1.
I tried something like
location /api {
    proxy_pass http://127.0.0.1/api;
}
But nginx just doesn't respond at all, and adding /* or * after them doesn't do the trick... what should it actually be to match the scenario I want?
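No answer was recorded here, but one likely culprit (an assumption, not from the original thread) is that proxying to 127.0.0.1 with no port sends the request back to nginx itself on port 80, creating a loop. A prefix location already forwards the remainder of the URI on its own, so pointing at the backend's actual port (8080 below is a placeholder) may be all that's needed:

```nginx
location /api {
    # prefix match with no URI part on proxy_pass: the full original
    # URI is forwarded, so /api/getMyName arrives as /api/getMyName
    proxy_pass http://127.0.0.1:8080;
}
```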

How do I reverse proxy the homepage/root using nginx?

I'm interested in sending folks who go to the root/homepage of my site to another server.
If they go anywhere else (/news or /contact or /hi.html or any of the dozens of other pages) they get proxied to a different server.
Since "/" is the nginx catchall to send anything that's not defined to a particular server, and since "/" also represents the homepage, you can see my predicament.
Essentially the root/homepage is its own server. Everything else is on a different server.
Thoughts?
location = / {
    # only "/" requests
}
location / {
    # everything else
}
More information: http://nginx.org/en/docs/http/ngx_http_core_module.html#location
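Putting it together with proxy_pass (the upstream names here are placeholders, not part of the original answer):

```nginx
location = / {
    # exact match: only "/" goes to the homepage server
    proxy_pass http://homepage_backend;
}
location / {
    # everything else: /news, /contact, /hi.html, ...
    proxy_pass http://main_backend;
}
```

The exact-match location (=) always wins over the prefix location for "/" itself, which resolves the catchall-vs-homepage conflict.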
