Disable Symfony application during maintenance

I'm looking for a way to disable a Symfony application during maintenance. I mean, in a very simple way:
1) I have an application where people can view information from the database.
2) The admin can change the information in the database. During this period, the database info should not be accessible because it is being deleted and updated.
3) What I want to know is whether there is any way to block the application during this maintenance period and redirect users (other than the admin user) to a maintenance notice page.
I remember there was a global way to redirect all URLs, but I don't remember it very well.
During the maintenance period I could set a parameter in the database (or somewhere else) and check this value to know whether the application is in maintenance mode, then either redirect to the normal URL or to the maintenance notice page.

If you store a parameter in the database to indicate that the admin is updating data, then it's fairly simple to use a kernel request listener:
Test the database value.
If the admin is the current user, let the request through; otherwise redirect.
See here: Events and event listeners
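For illustration only, here is a minimal sketch of such a listener. It assumes a hypothetical MaintenanceRepository service that reads the database flag, that admins carry ROLE_ADMIN, and that the notice page has a route named maintenance_notice; adapt these names to your own application.

<?php
// src/EventSubscriber/MaintenanceSubscriber.php -- sketch only; MaintenanceRepository,
// ROLE_ADMIN and the maintenance_notice route are assumptions, not from the answer.
// Uses PHP 8 constructor promotion; isMainRequest() is Symfony 5.3+ (isMasterRequest() before).
namespace App\EventSubscriber;

use App\Repository\MaintenanceRepository; // hypothetical service reading the DB flag
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Symfony\Component\HttpKernel\Event\RequestEvent;
use Symfony\Component\HttpKernel\KernelEvents;
use Symfony\Component\Routing\Generator\UrlGeneratorInterface;
use Symfony\Component\Security\Core\Authorization\AuthorizationCheckerInterface;

class MaintenanceSubscriber implements EventSubscriberInterface
{
    public function __construct(
        private MaintenanceRepository $maintenance,
        private AuthorizationCheckerInterface $authChecker,
        private UrlGeneratorInterface $urlGenerator,
    ) {
    }

    public static function getSubscribedEvents(): array
    {
        return [KernelEvents::REQUEST => 'onKernelRequest'];
    }

    public function onKernelRequest(RequestEvent $event): void
    {
        // Only act on the main request, and only while the database flag says maintenance is on.
        if (!$event->isMainRequest() || !$this->maintenance->isEnabled()) {
            return;
        }

        // Avoid a redirect loop on the maintenance page itself.
        if ($event->getRequest()->attributes->get('_route') === 'maintenance_notice') {
            return;
        }

        // The admin keeps working normally; everyone else gets the notice page.
        if ($this->authChecker->isGranted('ROLE_ADMIN')) {
            return;
        }

        $url = $this->urlGenerator->generate('maintenance_notice');
        $event->setResponse(new RedirectResponse($url));
    }
}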

You can do this without changing any of the project code, directly in the web server config. The following works on Nginx (but use with caution, because "if is evil") and it should be no problem to reproduce it for Apache.
location / {
    # if the user is not you / an admin
    if ($remote_addr != "your.ip.address") {
        # return HTTP code 503 (the server is currently unavailable, e.g. down for maintenance)
        return 503;
    }
    # otherwise, go ahead as usual
    try_files $uri $uri/ /app.php?$is_args$args;
}
# show a specific page for HTTP code 503
error_page 503 @maintenance;
location @maintenance {
    # which is maintenance.html
    rewrite ^(.*)$ /maintenance.html break;
}
The IP check was a quick solution in my case. You could also check for specific cookies, etc. (a cookie-based variant is sketched below); the idea stays the same: the application itself is untouched.
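As a rough illustration of the cookie variant (the cookie name maintenance_bypass and its value are made-up placeholders, not from the original answer):

location / {
    # let the request through only when a bypass cookie is present
    # (cookie name and value are placeholders for this sketch)
    if ($cookie_maintenance_bypass != "let-me-in") {
        return 503;
    }
    try_files $uri $uri/ /app.php?$is_args$args;
}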

Related

How to proxy between 2 apps containing shared paths in a DRY way

I'm a newbie with Nginx and I am looking for some advice on how to avoid repeating location blocks while preserving functionality.
I used to have one React application at react.mydomain.cc.
In my Nginx configuration file I was proxying everything from / to react.mydomain.cc:
location / {
    try_files $uri @approute;
}
location @approute {
    proxy_ssl_server_name on;
    set $react "http://react.mydomain.cc";
    proxy_pass $react$request_uri;
}
Now, I want to replace part of the old application with a new one without having to make changes to the old.
The logic would be:
If the user goes to www.mydomain.cc, they should be proxied to the new app, http://new-react.mydomain.cc.
The same goes for other paths like:
/about
/contact
/blog
/whoiam
/photos
and a few more
These pages are also active through the other subdomain, http://react.mydomain.cc/about, but not accessible through the Nginx domain, www.mydomain.cc.
If the user goes to:
/notes
/playground
/app/*
/internal/*
they should be proxied to the old app.
Example: the user goes to www.mydomain.cc/notes and is proxied to http://react.mydomain.cc/notes. Then they click the link /about and are proxied to the new app, http://new-react.mydomain.cc/about, even though the old app also has /about.
Can anyone help me avoid repeating location blocks 20 times? I'm trying to achieve the same result in a cleaner way.
Please let me know if any edits are needed to clarify. Remember, I am new to this.
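One possible way to keep this DRY, sketched here purely as an illustration (the path list and hostnames mirror the question; whether a resolver directive is needed for the variable proxy_pass depends on your setup), is an Nginx map that picks the upstream per path prefix so a single location block suffices:

# map goes in the http {} context
map $uri $backend {
    # everything defaults to the new app
    default               "http://new-react.mydomain.cc";
    # paths that must stay on the old app
    ~^/notes(/|$)         "http://react.mydomain.cc";
    ~^/playground(/|$)    "http://react.mydomain.cc";
    ~^/app/               "http://react.mydomain.cc";
    ~^/internal/          "http://react.mydomain.cc";
}

server {
    listen 80;
    server_name www.mydomain.cc;

    location / {
        proxy_ssl_server_name on;
        # proxy_pass with a variable may require a resolver directive
        proxy_pass $backend$request_uri;
    }
}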

Nginx Config, Deny Image Stored in Specific Folder If $http_cookie Isn't Set

I am currently launching a WordPress site that moves image uploads into a certain folder when they are added. On my development server I have made it so that images stored in this folder are NOT ACCESSIBLE, unless a specific $http_cookie is set in the browser. Here is the location block I'm using for this in my development NGINX config:
location ~ ^/wp-content/uploads/employee_message/(.*) {
    if ($http_cookie !~ 'wp_2651267=user_employee123') {
        return 301 https://sitename.com;
    }
}
On the development server, when I view a file such as http://sitename.com/wp-content/uploads/employee_message/1234-5678-1234-5678/image_here.png, for example, it will only let me view it if I have the wp_2651267=user_employee123 cookie set. This is good.
However, when I move this location block into my production config (I'm using RunCloud) it allows the image to be viewed with or without the cookie. This is no good.
I'm seeing that this location block below is part of the default config, and my block above gets pulled in AFTER this one:
location ~ \.(ico|css|gif|jpe?g|png|gz|zip|flv|rar|wmv|avi|css|js|swf|png|htc|mpeg|mpg|txt|otf|ttf|eot|woff|woff2|svg|webp)$ {
    expires 1M;
    include /etc/nginx-rc/conf.d/sitename.d/headers.conf;
    add_header Cache-Control "public";
    include /etc/nginx-rc/extra.d/sitename.location.static.*.conf;
    try_files $uri $uri/ /index.php$is_args$args;
}
Is it possible that this is undoing the cookie business I'm adding in?
Here is an example config that RunCloud uses: RunCloud NGINX Config
My location block gets pulled in on this line:
include /etc/nginx-rc/extra.d/runcloud-blog.location.main.*.conf;
There are no errors when I run a test, and it has definitely been reloaded many, many times. Are there any reasons that my location block isn't working in this setup? Is there more information I can provide to help troubleshoot this?
Thanks so much for taking the time to read this! Please let me know if you have any insights.
To help people who find this question in the future:
Nginx then tries to match against the regular expression locations sequentially. The first regular expression location that matches the request URI is immediately selected to serve the request.
via Understanding Nginx Server and Location Block Selection Algorithms
Per the question, the less restrictive regex location was declared BEFORE the more restrictive one, so it was selected to serve the request.
Moving the more restrictive location BEFORE the other one will cause it to be selected when its regex matches, for example:
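A trimmed sketch of that ordering (the static-file block is abbreviated from the default config quoted above; RunCloud's include layout may require placing this in the appropriate extra.d file):

# the cookie-gated location is declared first, so it wins for files under employee_message
location ~ ^/wp-content/uploads/employee_message/(.*) {
    if ($http_cookie !~ 'wp_2651267=user_employee123') {
        return 301 https://sitename.com;
    }
}

# generic static-asset handling comes after it
location ~ \.(ico|css|gif|jpe?g|png|js|svg|woff2?)$ {
    expires 1M;
    add_header Cache-Control "public";
    try_files $uri $uri/ /index.php$is_args$args;
}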

How to use fail2ban with Meteor on Nginx

I'm setting up a DigitalOcean droplet running Ubuntu 18.04 to host my Meteor 1.8 app via Phusion Passenger / Nginx. I will configure it to use SSL with Let's Encrypt.
fail2ban is a recommended tool to protect against brute force attacks, but I can't work out how to use it with Meteor, or even if it's appropriate. I've read several tutorials but there is something basic I don't understand.
I have used location blocks in my Nginx config file to block access to all URLs by default and allow only the necessary ones:
# deny all paths by default
location / { deny all; }
# allow sockjs
location /sockjs { }
# allow required paths
location = / { }
location /my-documents { }
location /login { }
location /register { }
...
# serve js and css
location ~* "^/[a-z0-9]{40}\.(css|js)$" {
    root /var/www/myapp/bundle/programs/web.browser;
    access_log off;
    expires max;
}
# serve public folder
location ~ \.(jpg|jpeg|png|gif|mp3|ico|pdf|svg) {
    root /var/www/myapp/bundle/public;
    access_log off;
    expires max;
}
# deny unwanted requests
location ~ (\.php|\.aspx|\.asp|myadmin) {
    return 404;
}
My basic question is: would fail2ban detect failed attempts to log in to my Meteor app, and if so, how? If not, then what's its purpose? Is it looking for failed attempts to log in to the server itself? I have disabled password access on the droplet; you can only connect to the server via SSH.
And how does this relate to Nginx password protection of sections of the site? Again, what's this for and do I need it? How would it work with a Meteor app?
Thank you for any help.
Any modern single-page application using React/Vue/Blaze as its rendering engine simply doesn't send a URL request to the server for each page in the UI.
Meteor loads all its assets at the initial page load, and the rest is done over sockets using DDP. It might load static assets as separate requests.
Any server API calls implemented as Meteor methods also won't show up in server logs.
So fail2ban will detect some brute force attacks, and could therefore be useful in blocking those attacks and preventing them from swamping the server, but it won't detect failed login attempts.
You could adapt the application to detect failed logins, and call the fail2ban API to log them (if that is possible). Otherwise I'm not sure whether it is totally appropriate for protecting a meteor server.
My conclusion is that yes, fail2ban is worth using with Meteor. As far as I can tell, Nginx password protection isn't relevant, but there's other good stuff you can do.
Firstly, I think it's worth using fail2ban on any server to block brute force attacks. My test server has been online only a couple of days with no links pointing to it and already I'm seeing probes to paths like wp-admin and robots.txt in the Nginx logs. These probes can't achieve anything because the files don't exist, but I think it's safer to ban repeated calls.
I worked from this tutorial to set up a jail for forbidden urls, modifying the jail definition to point to my actual Nginx log file.
Then, I've modified my app to record failed login attempts and written a custom jail and filter to block these. It may be that nobody will bother to write a script to attack a Meteor site specifically, and my Meteor app has throttling on the logins, but again I feel it's better to be more careful than less.
Here's how I've modified my app:
server/main.js
import { Meteor } from 'meteor/meteor';
import { Accounts } from 'meteor/accounts-base';
import moment from 'moment';

// build a log line in the same format as the Nginx log, so fail2ban can parse it
const buildServerLogText = ((text) => {
    // 'auth.getClientConnection' is an app-specific method returning the caller's connection info
    const connection = Meteor.call('auth.getClientConnection');
    return `${moment(new Date()).format('YYYY/MM/DD HH:mm:ss')} ${text}, client: ${connection.clientAddress}, host: "${connection.httpHeaders.host}"`;
});
// log failed login attempts so fail2ban can find them in the Nginx logs
Accounts.onLoginFailure(() => {
    const text = buildServerLogText('[error]: Meteor login failure');
    console.log(text);
});
This writes failed login attempts to the server in this form:
2020/03/10 15:40:20 [error]: Meteor login failure, client: 86.180.254.102, host: "209.97.135.5"
The date format is important; fail2ban is fussy about this.
I also had to set passenger_disable_log_prefix on; in my Phusion Passenger config file to stop a prefix being added to the log entry. As I'm deploying my app with Phusion Passenger, the Nginx config is in the Passenger config file.
Then my fail2ban filter is like this:
/etc/fail2ban/filter.d/nginx-login-failure.conf
[Definition]
failregex = ^ \[error\]:.*Meteor login failure.*, client: <HOST>, .*$
ignoreregex =
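The matching jail isn't shown in the answer; as a rough sketch of what it could look like (the log path, ports and ban settings below are placeholders to adapt, e.g. to the Passenger/Nginx error log mentioned above):

/etc/fail2ban/jail.d/nginx-login-failure.conf
[nginx-login-failure]
enabled  = true
port     = http,https
filter   = nginx-login-failure
# point this at the log file the app's failure lines actually end up in
logpath  = /var/log/nginx/error.log
maxretry = 5
findtime = 600
bantime  = 3600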

Sending extra header in nginx rewrite

Right now, I am migrating the domain of my app from app.example.com to app.newexample.com using the following nginx config:
server {
    server_name app.example.com;
    location /app/ {
        rewrite ^/app/(.*)$ http://app.newexample.com/$1;
    }
}
I need to show a popup banner to notify the user of the domain name migration.
And I want to do this based on the referrer or some other kind of header at app.newexample.com.
But how can I attach an extra header to the above rewrite so that the JavaScript can detect that header and show the banner only when it is present? A user going directly to app.newexample.com should not see that popup banner.
The thing is that when you "rewrite" to a URI that includes a protocol and hostname (http://app.newexample.com/ in your case), Nginx issues a real HTTP redirect (302 by default, or 301 "permanent" if the permanent flag is given). This leaves you only two mechanisms to transfer any information to the handler of the new URL:
cookie
URL itself
Since you are redirecting users to the new domain, a cookie is a no-go. But even in the case of a common domain I would choose the URL to transfer this kind of information, like:
server_name app.example.com;
location /app/ {
    rewrite ^/app/(.*)$ http://app.newexample.com/$1?from_old=yes;
}
This gives you the freedom to process it either in Nginx or in the browser (using JavaScript). You may even do what you intended initially, issuing a special HTTP header for JavaScript in the new app server's Nginx configuration:
server_name app.newexample.com;
location /app {
    if ($arg_from_old) {
        add_header X-From-Old-Site yes;
    }
}
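On the browser side, a minimal sketch of reading the query parameter directly instead of the header (showMigrationBanner() is a placeholder for your own banner code):

// show the migration banner when the old site redirected us here
const params = new URLSearchParams(window.location.search);
if (params.get('from_old') === 'yes') {
    showMigrationBanner(); // placeholder for the app's banner logic
    // optionally strip the marker so reloads and bookmarks stay clean
    params.delete('from_old');
    const query = params.toString();
    history.replaceState(null, '', window.location.pathname + (query ? '?' + query : ''));
}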
A similar problem was discussed here. You can try the third-party module HttpHeadersMore (I didn't try it myself). But even if the above does not work at all, with the help of this module you can do absolutely everything. An example is here.
Your redirect is missing one thing: the redirect type/code. You should add permanent at the end of your rewrite line (without an explicit flag, a rewrite to an absolute URL defaults to a 302 temporary redirect).
rewrite ^/app/(.*)$ http://app.newexample.com/$1 permanent;
An even better way is to use return:
location /app {
    return 301 $scheme://app.newexample.com$request_uri;
}
Adding a GET parameter, as mentioned above, would also be a reliable way to do it; you can easily set a session (flash) value and redirect back to the page itself after removing the appended GET parameter.
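As a rough, framework-agnostic sketch of that flash-and-redirect idea in plain PHP (the session key name is made up for the example):

<?php
// Sketch: remember that the visitor arrived via the old domain, then redirect
// back to the same page with the marker parameter removed.
session_start();
if (isset($_GET['from_old'])) {
    $_SESSION['show_migration_banner'] = true;   // flash-style flag, read once by the page
    unset($_GET['from_old']);
    $query = http_build_query($_GET);
    $cleanUrl = strtok($_SERVER['REQUEST_URI'], '?') . ($query !== '' ? '?' . $query : '');
    header('Location: ' . $cleanUrl, true, 302);
    exit;
}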
EDIT:
Redirecting doesn't send a referrer header. If the old domain is still working, you could put up a simple PHP file that does the redirect with a header() call:
header("Location: http://app.newexample.com");
One possible solution without any headers would be to check the document.referrer property:
if (document.referrer.indexOf("http://app.example.com") === 0) {
    alert("We moved!");
}
Using a 301 will set the referrer to the old page. If the referrer doesn't start with the old page URL, the visit was not directed by that page. Maybe a bit quick and dirty, but it should work.

nginx multi-stage 404 handling

We just moved to a new site and want to redirect old links where necessary; however, some still work. For instance,
/holidays/sku.html
still works, while
/holidays/christmas/
no longer works. I'd like to allow the site to attempt to serve a page, and only when a 404 is reached, THEN pass it through a series of regex redirects that may look like:
location ~* /holidays/(.*)+$ { set $args ""; rewrite ^ /holidays.html?r=1 redirect; }
I'm using a ~* location directive instead of doing a direct rewrite because we're moving from a Windows-based ASPX site to Magento with php-fpm behind nginx, so we suddenly have to worry about case sensitivity.
Without using nested location directives (which are actively discouraged by the nginx documentation) with an @handler of some sort, what's the best way to allow nginx to attempt to serve the page first, THEN pass it across the redirects if that fails?
Thanks!
See the try_files documentation: http://wiki.nginx.org/NginxHttpCoreModule#try_files
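As an illustration only, here is one way try_files plus an error_page fallback could express "serve the page first, then run the legacy redirects" (the php-fpm socket path is a placeholder, and fastcgi_intercept_errors is needed so Nginx, not Magento, handles the 404):

location / {
    # try a real file or directory first, then hand the request to Magento
    try_files $uri $uri/ @magento;
}

location @magento {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass unix:/var/run/php-fpm.sock;   # placeholder socket path
    fastcgi_intercept_errors on;               # let error_page see the app's 404
}

error_page 404 = @legacy_redirects;

location @legacy_redirects {
    # case-insensitive legacy rule from the question; add more rewrites as needed
    rewrite "(?i)^/holidays/.+$" /holidays.html?r=1 redirect;
    return 404;
}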
