Change nginx Root Directory at Scheduled Time

Let me quickly preface this by saying that I'm not a server administrator, so it's quite possible that I may be asking the "wrong" questions here.
I have a situation where we have a domain that will serve static files (HTML, images, etc.) that are configured and built by an existing, separate application. At certain scheduled datetimes, we will need the content of the sites to change to a different set of static files.
Since the files will all be prepared ahead of time, I was wondering if it was possible for nginx to be able to "switch" the root directory to direct traffic to the appropriate place based on these scheduled datetimes.
So if there were a series of directories maybe like this:
/www.example.com-20160701000000/content/public
/www.example.com-20160708000000/content/public
/www.example.com-20160801120000/content/public
And then the configuration would say that from 1 July 2016 00:00:00 through 7 July 2016 23:59:59, the site root for www.example.com would be /www.example.com-20160701000000/content/public, and so forth.
Some other things I've looked into:
Some form of middleware like PHP, but I want to avoid this for portability.
SSI doesn't really seem like an option. It seems like I'd have one root directory, and in index.html the contents would be something like <!--# include file="www.example.com-$datestamp/content/public/index.html" -->, but I'd have to do this for every page, and I'm not sure how it would work if the page names differ between versions.
A cron job (or something else) that moves files or edits a file at the appropriate time, but this seems like a really bad potential failure point.
So tl;dr, can nginx be configured in some way to have root directories that are active for a domain at different scheduled times? Or is there a better approach to this problem that I'm not aware of?

nginx has the variables $time_iso8601 and $time_local, which you could use to construct a dynamic root; see the core module documentation for details.
One approach is to express your rules as a map and set the root directive from the mapped variable (or from named captures); see the ngx_http_map_module documentation for details.
I tested the concept using this:
# Map the ISO 8601 timestamp to a document root; the regex matches
# any date from 2016-06-20 through 2016-06-29.
map $time_iso8601 $root {
    default           /usr/local/www/test;
    ~^2016-06-2[0-9]  /usr/local/www/test/test-20160625;
}

server {
    root $root;
    ...
}
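As a further, untested sketch of the same idea applied to the scheduled windows in the question (directory names taken from the question; the 12:00:00 cut-over on 1 August is ignored here and would need an extra regex against the hour portion of $time_iso8601):

map $time_iso8601 $site_root {
    # 1 July 00:00:00 through 7 July 23:59:59
    ~^2016-07-0[1-7]  /www.example.com-20160701000000/content/public;
    # rest of July (regexes are tried in order, so 1-7 July matched above)
    ~^2016-07         /www.example.com-20160708000000/content/public;
    # August (simplified: ignores the midday cut-over on the 1st)
    ~^2016-08         /www.example.com-20160801120000/content/public;
    # fallback for any other time
    default           /www.example.com-20160701000000/content/public;
}

server {
    server_name www.example.com;
    root $site_root;
}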

Related

nginx SSI module; disable file include?

I am looking at the NGINX SSI module and I am wondering if there is a way to block the "file" option on SSI.
http://nginx.org/en/docs/http/ngx_http_ssi_module.html
So that someone does not do this
<!--# include file="/etc/passwd" -->
I wasn't able to find much on security with regard to include file; does anyone know anything about this?
First of all, the only way you can be completely certain that this won't happen is to run nginx as a non-root user (there are many other reasons to do so, and I'm sure you are doing so already).
Another thing to consider is that SSIs should generally be treated as privileged code, just as CGI scripts are. You should not generally allow them from untrusted users.
That said, the answer to your question is that nginx processes the SSI include directive (see the source code) by treating the file and uri options identically and passing them to ngx_http_subrequest. This is essentially the same as serving a request for the given file, in particular in that the name is resolved relative to the root directive currently in effect. So there are still some security considerations, but in general it's much safer than it would be if the SSI parser simply opened and read the file on its own.
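To act on the "treat SSIs as privileged code" advice above, one option (a sketch only; the paths are illustrative) is to enable SSI solely for locations whose content you control, so files coming from untrusted users are never parsed for SSI directives:

server {
    root /usr/local/www/site;

    # SSI stays off here (off is also the built-in default).
    location / {
        ssi off;
    }

    # Only deploy-controlled templates live under /pages/, so only
    # these files are ever parsed for <!--# include ... --> directives.
    location /pages/ {
        ssi on;
    }
}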

Managing folder locations when swapping domains

I have recently been working on an application that will soon need to be moved from one server to another (testing environment to live environment). The problem I am having is how I can make it so that when I move folders, everything still works without needing to change directories.
Example -
<?php include $_SERVER['DOCUMENT_ROOT'] . '/arcade/layouts/_header.php'; ?>
Here I include a layout file called '_header.php'. The problem I will have when I move from the test environment to the live one is that we will no longer have the '/arcade' folder, so this will be looking for a folder that doesn't exist. I could use ../ or ./, but then I wouldn't be able to use $_SERVER['DOCUMENT_ROOT'].
My initial thought was to have a _config.php and in it have global variables such as
$root = "/arcade";
Then when I move from test to live I just have to change one value from "/arcade" to "", and possibly the path to the config file.
Just looking for some insight for managing folders and files across domains
Having a configuration file is desirable anyway. Anything that is a parameter of the application (locations, access parameters, feature toggles, other flags, etc.) should be defined there.
Even better is to have a local configuration as well: at the end of the main configuration, check whether a local configuration file exists (say, config.local.php) and include it if it does. This way you can have a minimal, possibly unversioned, environment-specific override for just the settings you need; a sketch follows below.
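A minimal sketch of that pattern (the file names and array keys are illustrative assumptions, not part of the original answer):

<?php
// _config.php -- defaults shared by every environment.
$config = [
    'root' => '/arcade',   // path prefix used on the test server
];

// config.local.php is an optional, environment-specific override that
// stays out of version control and simply returns an array of settings,
// e.g. ['root' => ''] on the live server.
$localConfig = __DIR__ . '/config.local.php';
if (file_exists($localConfig)) {
    $config = array_merge($config, require $localConfig);
}

return $config;

A page would then do something like $config = require $_SERVER['DOCUMENT_ROOT'] . '/_config.php'; and build its include paths as $_SERVER['DOCUMENT_ROOT'] . $config['root'] . '/layouts/_header.php';.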

Accessing an environment variable across nginx w/ Lua and Rails

I'm implementing something like this to let one service control access to a separate upstream service in nginx.
Briefly: A Rails app sets an HMAC cookie, which is then checked by some Lua code thanks to an access_by_lua directive in nginx.
To generate and verify the cookie, both Rails and nginx-Lua must of course share a secret key. I've tried setting this up as an environment variable in /etc/environment.
To make the var available in Rails, I had to fiddle with Unicorn's init script a bit. But at least that script is contained within the project, and just symlinked into place.
Meanwhile, to get at the variable in Lua, I do something like this: os.getenv("MY_HMAC_SECRET"). But in order for Lua to have access to that when running under nginx, it must first be listed using the env directive in the main nginx config.
So now, I'm feeling like my configuration is being spread out all over the place:
in /etc/environment (outside my project)
in /etc/nginx/nginx.conf (outside my project)
in unicorn's init script
in my site's nginx vhost config
It's starting to seem a little ridiculous just to make a simple string accessible in multiple places...
Is there a simpler way to do this? Honestly, the easiest way I can think of is hardcode it in the 2 places I need it, and be done. But that sounds nasty.
Better to put it only in the two places it's actually needed, in the two respective configuration files, than in the global environment where every process has access to it, as you have it now.
I would use the init_by_lua directive in your vhost config.
init_by_lua 'HMAC_SECRET = "SECRET-STRING"';

server {
    # and so on
}
So you'll have your secret in two places, but both within your project (if I understand correctly that the vhost config is part of your project).
You could even use init_by_lua_file and have your Unicorn init script read and parse that same file, so the secret itself lives in exactly one place.
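For illustration only, a rough sketch of how init_by_lua_file and access_by_lua could fit together (the file path, the cookie name "auth", and the value.signature cookie format are assumptions; a real setup would have to match whatever format the Rails side actually produces):

http {
    # secret.lua lives in the project and only does: HMAC_SECRET = "SECRET-STRING"
    # The same file can be read by the Unicorn init script.
    init_by_lua_file /path/to/project/config/secret.lua;

    server {
        location /protected/ {
            access_by_lua '
                -- HMAC_SECRET is the global set by init_by_lua_file above;
                -- assumed cookie format: <value>.<base64 HMAC-SHA1 of value>
                local cookie = ngx.var.cookie_auth or ""
                local value, sig = cookie:match("^(.*)%.([^.]+)$")
                local expected = value and
                    ngx.encode_base64(ngx.hmac_sha1(HMAC_SECRET, value))
                if not sig or sig ~= expected then
                    return ngx.exit(ngx.HTTP_FORBIDDEN)
                end
            ';
        }
    }
}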

Running several sites off the same code except a config

I have the same code which will be used for several sites. In the Nginx config I wanted to have all the sites point to the same code folder.
I think this should work. The only catch is that I want each site to use a different config file.
How could something like this be achieved? Surely I wouldn't need to duplicate all the websites code just to have each one have a different config?
What language are you scripting in? Most languages have a way to examine the incoming request. From this you can extract the domain name and decide which conf file to load based on that name, using an if or switch statement.
You could also use a GET variable, for example www.domain.com/index.html?conf=conf1.conf. Then in your controller you'd look at that GET variable to determine which conf file to load.
Either of these approaches should be easy to find in the docs for your scripting language.
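As a sketch of the first approach (PHP is assumed only because the question doesn't name a language; the host names and config paths are illustrative):

<?php
// Pick a per-site config based on the requested host name.
$host = strtolower($_SERVER['HTTP_HOST'] ?? '');
$host = preg_replace('/:\d+$/', '', $host);   // strip a port, if present
$host = preg_replace('/^www\./', '', $host);  // treat www.example.com as example.com

switch ($host) {
    case 'example.com':
        $configFile = __DIR__ . '/config/example.com.php';
        break;
    case 'example.org':
        $configFile = __DIR__ . '/config/example.org.php';
        break;
    default:
        $configFile = __DIR__ . '/config/default.php';
}

$config = require $configFile;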

Interacting With Folders Outside The Root/Web Directory With Dreamweaver (CS5)

Using FileZilla, I can access folders that are outside my web directory. How can I do the same with Dreamweaver so that I can edit the files and automatically save/upload all through Dreamweaver? I currently can only access the web directory.
I know how to include them with PHP, but I would like Dreamweaver to find/access them.
Thank you!
You would have to set the Site Definition (both local and remote) paths to look one level higher than you currently have it. So if the local path is
My Documents/Web Sites/This Site
you would change it to
My Documents/Web Sites/
and if the remote is:
/user/home/domain.com/
change to
/user/home/
The problem you are going to run into is that Dreamweaver doesn't work well when set up like this. It assumes the Remote path is the public web root and will automatically create all sorts of files and folders there, which it expects to be in the public root. Also, features like automatically setting paths to includes and images will stop working, since all paths will start outside the public web root.
Best to leave it as it is and use an external FTP program to handle the files outside of the web site.
We've bumped up against this situation previously, where the desire was to move the PHP include files outside the public HTML directory. JCL1178's answer is absolutely conceptually correct.
The actual implementation was to duplicate the site (under "Manage Sites") and essentially create a separate site definition for the "includes" directory one level up. So the "Root Directory" setting stayed normal (in our case "public_html/") in the main site, and we removed "public_html/" from the Root Directory setting in the "includes" site, effectively moving its path one level up.
Definitely not an ideal situation/workflow, to say the least, as you'll end up with two site definitions for one site (which can cause other issues); but Dreamweaver is what it is. We were working on a project offsite that did not allow for anything other than Dreamweaver to be used, so this is what we came up with to comply.
As an added note: we were only able to implement this solution because the webhosting plan allowed us to get to the root. If you're on a webhosting plan that is strictly limited to the public directory, the whole thing will be DOA.
