Accessing an environment variable across nginx w/ Lua and Rails - nginx

I'm implementing something like this to let one service allow access to a separate upstream service in nginx.
Briefly: A Rails app sets an HMAC cookie, which is then checked by some Lua code thanks to an access_by_lua directive in nginx.
To generate and verify the cookie, both Rails and nginx-Lua must of course share a secret key. I've tried setting this up as an environment variable in /etc/environment.
To make the var available in Rails, I had to fiddle with Unicorn's init script a bit. But at least that script is contained within the project, and just symlinked into place.
Meanwhile, to get at the variable in Lua, I do something like this: os.getenv("MY_HMAC_SECRET"). But in order for Lua to have access to that when running under nginx, it must first be listed using the env directive in the main nginx config.
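Concretely, the two pieces look roughly like this (a sketch; the actual cookie verification is omitted):

# main nginx.conf: whitelist the variable so worker processes inherit it
env MY_HMAC_SECRET;

# vhost config:
server {
    location / {
        access_by_lua '
            local secret = os.getenv("MY_HMAC_SECRET")
            if secret == nil then
                ngx.log(ngx.ERR, "MY_HMAC_SECRET is not set")
                return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
            end
            -- ...verify the HMAC cookie against secret here...
        ';
    }
}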
So now, I'm feeling like my configuration is being spread out all over the place:
in /etc/environment (outside my project)
in /etc/nginx/nginx.conf (outside my project)
in unicorn's init script
in my site's nginx vhost config
It's starting to seem a little ridiculous just to make a simple string accessible in multiple places...
Is there a simpler way to do this? Honestly, the easiest way I can think of is to hardcode it in the two places I need it and be done. But that sounds nasty.

Better to put it only in the two places it's actually needed, in the two respective configuration files, than in the global environment where every process has access to it, as you have it now.

I would use the init_by_lua directive in your vhost config.
init_by_lua 'HMAC_SECRET = "SECRET-STRING"';

server {
    # and so on
}
So you'll have your secret in two places, but both within your project (if I understand correctly that the vhost config is in your project).
You can even use init_by_lua_file and, with some effort, read and parse that file in your unicorn init as well.
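For instance, the file-based variant (path hypothetical):

# vhost config, http context:
init_by_lua_file /path/to/project/config/secret.lua;

# where config/secret.lua contains just one line:
#   HMAC_SECRET = "SECRET-STRING"

That way the string itself lives in exactly one file inside the project, and the unicorn init only needs to pull it out of that same file.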

Related

Change nginx Root Directory at Scheduled Time

Let me quickly preface this by saying that I'm not a server administrator, so it's quite possible that I may be asking the "wrong" questions here.
I have a situation where we have a domain that will serve static files (HTML, images, etc.) that are configured and built by an existing, separate application. At certain scheduled datetimes, we will need the content of the sites to change to a different set of static files.
Since the files will all be prepared ahead of time, I was wondering if it was possible for nginx to be able to "switch" the root directory to direct traffic to the appropriate place based on these scheduled datetimes.
So if there were a series of directories maybe like this:
/www.example.com-20160701000000/content/public
/www.example.com-20160708000000/content/public
/www.example.com-20160801120000/content/public
And then the configuration would say that from 1 July 2016 00:00:00 through 7 July 2016 23:59:59, the site root for www.example.com would be /www.example.com-20160701000000/content/public, and so forth.
Some other things I've looked into:
Some form of middleware like PHP, but I want to avoid this for portability.
SSI doesn't really seem like an option. I'd have one root directory, and in index.html the contents would be something like <!--# include file="www.example.com-$datestamp/content/public/index.html" -->, but I'd apparently have to do this for every page. I'm also not sure how it would work if the page names differ between versions.
A cron job or something else that either moves files or edits a file at the appropriate time; this just seems like a really bad potential failure point.
So tl;dr, can nginx be configured in some way to have root directories that are active for a domain at different scheduled times? Or is there a better approach to this problem that I'm not aware of?
nginx has variables called $time_iso8601 and $time_local which you could use to construct a dynamic root; see the ngx_http_core_module variables documentation for details.
One approach would be to construct your rules as a map and set the root directive appropriately using the mapped variable or named captures; see the ngx_http_map_module documentation for details.
I tested the concept using this:
map $time_iso8601 $root {
    default          /usr/local/www/test;
    ~^2016-06-2[0-9] /usr/local/www/test/test-20160625;
}

server {
    root $root;
    ...
}
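Extending that to the dated directories from the question could look something like this (a sketch: $time_iso8601 has the form 2016-07-01T00:00:00+00:00, map regexes are tried in the order they appear, and I'm assuming times outside the listed windows should fall through to the newest set):

map $time_iso8601 $root {
    # anything not matched below, including 1 Aug 2016 12:00:00 onward
    default                         /www.example.com-20160801120000/content/public;

    # 1 July 2016 00:00:00 through 7 July 2016 23:59:59
    ~^2016-07-0[1-7]T               /www.example.com-20160701000000/content/public;

    # 8 July 2016 00:00:00 through 1 Aug 2016 11:59:59
    ~^2016-07                       /www.example.com-20160708000000/content/public;
    "~^2016-08-01T(0[0-9]|1[01]):"  /www.example.com-20160708000000/content/public;
}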

symfony2 access test environment by subdomain?

I'm looking for a way to expose my test environment via a subdomain. Basically, I want to do the equivalent of the console --env "test" via URL. So someone accessing http://example.com will get the production site, but the external testers can go to http://test.example.com and will get the test environment, with test database and everything.
I thought just using SetEnv ENV "test" in my apache config would do the trick, but apparently it doesn't.
I'm fairly sure this is a pretty common thing, so can someone guide me to the solution?
It's really weird that you're trying to access the test environment through a URL, but I'll guess you need some special configuration for WebTestCases.
You need to create an app_test.php file in the web directory and boot the kernel with the 'test' environment parameter. To see how to do it, check out the already available app.php and app_dev.php files.
After that, set up your Apache to aim for app_test.php when you hit that URL. Also bear in mind that you will probably have to make Apache ignore .htaccess, because it will point requests back to app.php. You can do that using AllowOverride None.
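If you're on nginx instead (this page's topic), the equivalent vhost might look roughly like this (a sketch; the document root and PHP-FPM socket path are hypothetical):

server {
    listen 80;
    server_name test.example.com;
    root /var/www/project/web;

    location / {
        # send everything to the test front controller
        try_files $uri /app_test.php$is_args$args;
    }

    location = /app_test.php {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}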

Running several sites off the same code except a config

I have the same code which will be used for several sites. In the Nginx config I wanted to have all the sites point to the same code folder.
I think this should work. The only catch is that I want each site to use a different config file.
How could something like this be achieved? Surely I wouldn't need to duplicate all the websites code just to have each one have a different config?
What language are you scripting in? Most languages will have a way to examine the incoming request. From this you could extract the domain name and choose which conf file to load based on that name, using an if or switch statement.
You could also use a GET variable, for example www.domain.com/index.html?conf=conf1.conf. Then in your controller you'd need to look at that GET variable to determine which conf file to load.
Either of these solutions should be easy to find in the docs for your scripting language.
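If you'd rather keep the detection out of the application code entirely, nginx itself can hand each site its config file name. A sketch, assuming a FastCGI/PHP-FPM-style backend (APP_CONFIG is a made-up parameter name, and paths are hypothetical):

server {
    server_name siteA.example.com;
    root /var/www/shared-code;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # tell the application which config file to load
        fastcgi_param APP_CONFIG conf1.conf;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}

server {
    server_name siteB.example.com;
    root /var/www/shared-code;   # same code folder

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param APP_CONFIG conf2.conf;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}

The application then reads APP_CONFIG (it arrives as a FastCGI parameter) instead of inspecting the domain name itself.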

Local Coldfusion server for multiple domains / URLs

I want to get CF9 with IIS 7 set up locally to run with multiple domains.
I have read this one, but it doesn't say anything about the actual setup:
Need help with multiple URL setup on local CF9/Jrun install
I set up IIS so that I can open 127.0.0.1/domain1/index.cfm, and the page loads properly,
but all subsequent links fail with:
Could not find the included template: /_/definesession.cfm
But I see the file when typing file:///C:/InetPub/wwwroot/domain1/_/DefineSession.cfm into the browser.
The files are there, but apparently the server is only resolving the root directory correctly, not the subdirectories.
If I test http://127.0.0.1/domain1/_/BrowserDetect.cfm, with no includes (just a self-contained file), it executes properly.
The path in IIS is set to C:\InetPub\wwwroot\domain1
The binding's hostname is just domain1, with no TLD.
Also, the second instance, 127.0.0.1/domain2/index.cfm, is working correctly; and there as well, includes from subdirectories are failing.
ADDITIONAL NOTES: (added 1/3/12)
I guess it has to do with the CF mapping. I now moved the code to c:\coldfusion9\wwwroot\domain1_... and it sort of works.
In other words, I start the program here: C:\inetpub\wwwroot\domain1\index.cfm. Inside that index there is, for instance, an include of /_/definesession.cfm.
But it executes the file located here: c:\coldfusion9\wwwroot\domain1_\definesession.cfm. I just couldn't find anything on the web about mapping a local CF9 install for that situation. Any idea???
You might have a ColdFusion mapping for "/" that needs to be adjusted.
OK, I fixed it. There were multiple issues:
For whatever reason there were some issues with IIS, and I had to reinstall it.
I had to make sure 9.0.1 was installed.
I had to run the Web Server Configuration Tool multiple times to actually get the Handler Mappings in order.
http://127.0.0.1/domain1/ was wrong - it must be http://domain1/ etc.
I forgot to add the domains to the hosts file on the machine - stupid me.
I had to redesign my mappings to avoid overlaps between domains (i.e., mapping CFCs to /_/cfc/ on all domains needed different mapping names).
Now I have several different domains on my local machine and they work just fine.

Add dynamic response headers from a file

This is how I'm adding a static response header in my nginx.conf:
location /some-path/ {
    add_header X-Some-Static-Header "some static value";
}
Is there a way to add a response header with a dynamic value? This value should be pulled in from a file, or an environment variable, or some similar external place.
I'm trying to add a "X-App-Version" header, which is to be read from a file. When a new version of the web application is deployed, this file will be updated with the new version number. Preferably, nginx should immediately start serving up the new version number, without a restart/reload.
How can this be done?
It doesn't look like there's a way to do this without simply changing the config file when you update the version number. That said, what you're asking for shouldn't be too difficult to automate if you can live with a restart/reload.
If you're using git (or really any VCS), you could use commit hooks to trigger a simple shell script that finds and replaces the line in the config file, runs nginx -t -c /etc/nginx/nginx.conf, and restarts the server.
I wish there were an existing NGINX module to do what you're asking, so I'm putting that on my todo list, but for most use cases this should probably be a reasonably acceptable hack.
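For what it's worth, if the Lua module used in the first question on this page is an option, something like this would pick up new values without a reload (a sketch; the VERSION path is hypothetical, and reading the file on every request is naive, so you'd probably want to cache the value):

location /some-path/ {
    header_filter_by_lua_block {
        -- read the deployed version number from disk on each request
        local f = io.open("/var/www/app/VERSION", "r")
        if f then
            local version = f:read("*l")
            f:close()
            if version then
                ngx.header["X-App-Version"] = version
            end
        end
    }
}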
