I am looking at the NGINX SSI module and I am wondering if there is a way to block the "file" option on SSI.
http://nginx.org/en/docs/http/ngx_http_ssi_module.html
So that someone does not do this
<!--# include file="/etc/passwd" -->
I wasn't able to find much on security with regard to include file; does anyone know anything about this?
First of all, the only way you can be completely certain that this won't happen is to run nginx as a non-root user (there are many other reasons to do so, and I'm sure you are doing so already).
Another thing to consider is that SSIs should generally be treated as privileged code, just as CGI scripts are. You should not generally allow them from untrusted users.
That said, the answer to your question is that nginx handles the SSI include directive (see the source code) by treating the file and uri options identically and passing them to ngx_http_subrequest. This is essentially the same as serving a request for the given file; in particular, the name is resolved relative to the root directive currently in effect. So there are still some security considerations, but in general it's much safer than it would be if the SSI parser simply opened and read the file on its own.
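Given that behavior, the practical levers are keeping SSI scoped to trusted content and keeping sensitive files out of the document root. A minimal sketch (paths hypothetical):

```nginx
server {
    # /etc/passwd is not under this root, so
    # <!--# include file="/etc/passwd" --> resolves to
    # /var/www/site/etc/passwd and simply returns 404.
    root /var/www/site;

    location /templates/ {
        ssi on;   # process SSI only where trusted content lives
    }
}
```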
Related
I noticed my install of nginx has three folders called
/etc/nginx/sites-available
/etc/nginx/sites-enabled
/etc/nginx/conf.d
Do I really need these if I just want to work directly in the /etc/nginx/nginx.conf file and remove the include lines that pull these directories into nginx.conf? Are these directories used for anything else that would break things if I deleted them?
No, they are not needed if you define your server blocks properly in nginx.conf, but using them is highly recommended. As you noticed, they are only used because of the include /etc/nginx/sites-enabled/*; line in nginx.conf.
Out of curiosity, is there a reason why you don't want to use them? They are very useful: adding new sites, disabling sites, and so on is easier than maintaining one large config file. This layout is something of an nginx best practice.
Important information:
You should edit files only in the sites-available directory.
Never edit files inside the sites-enabled directory; otherwise you can run into problems if your editor runs out of memory or, for any reason, receives a SIGHUP or SIGTERM.
For example: if you are using nano to edit sites-enabled/default and it runs out of memory or receives a SIGHUP or SIGTERM, nano will create an emergency file called default.save inside the sites-enabled directory. That extra file will prevent Apache or NGINX from starting. If your site was working, it no longer will be, and you will have a hard time until you find something in the logs about the default.save file and remove it.
In the example above, if you had been editing the file inside the sites-available directory instead, nothing bad would have happened: the file sites-available/default.save would have been created, but it would do no harm there.
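The workflow above can be sketched with plain shell in a scratch directory (paths purely illustrative):

```shell
# Simulate the sites-available / sites-enabled layout.
base=$(mktemp -d)
mkdir -p "$base/sites-available" "$base/sites-enabled"

# The real file lives in sites-available ...
cat > "$base/sites-available/default" <<'EOF'
server {
    listen 80;
    server_name example.com;
}
EOF

# ... and sites-enabled only holds a symlink to it.
ln -s "$base/sites-available/default" "$base/sites-enabled/default"

# "Disabling" the site is just removing the link; the config survives.
rm "$base/sites-enabled/default"
test -f "$base/sites-available/default" && echo "config kept"
```

Any stray editor backup file created while working in sites-available never gets picked up by the include of sites-enabled/*.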
I saw the comment below in The Complete NGINX Cookbook on the official NGINX site:
The /etc/nginx/conf.d/ directory contains the default HTTP server configuration file. Files in this directory ending in .conf are included in the top-level http block from within the /etc/nginx/nginx.conf file. It's best practice to utilize include statements and organize your configuration in this way to keep your configuration files concise. In some package repositories, this folder is named sites-enabled, and configuration files are linked from a folder named sites-available; this convention is deprecated.
It is not a must, but it is a best practice if you host more than one site on your box.
It is easier to manage if you keep the http context and common directives (such as ssl_dhparam, ssl_ciphers, or even gzip settings) in nginx.conf, so that they apply across all sites.
Keep the site-specific server context (such as ssl_certificate, location directives, etc.) in /etc/nginx/sites-available/ and name the configuration file after your domain, e.g. your-domain.conf. The file in /etc/nginx/sites-enabled can then be just a symlink to the file in /etc/nginx/sites-available.
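Put together, the layout described above looks roughly like this (domain names and paths are placeholders):

```nginx
# /etc/nginx/nginx.conf -- shared settings live here
http {
    gzip on;
    ssl_ciphers HIGH:!aNULL:!MD5;

    include /etc/nginx/sites-enabled/*;
}

# /etc/nginx/sites-available/your-domain.conf -- per-site settings,
# enabled via: ln -s ../sites-available/your-domain.conf sites-enabled/
server {
    listen 443 ssl;
    server_name your-domain.example;
    ssl_certificate     /etc/ssl/your-domain.crt;
    ssl_certificate_key /etc/ssl/your-domain.key;

    location / {
        root /var/www/your-domain;
    }
}
```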
Let me quickly preface this by saying that I'm not a server administrator, so it's quite possible that I may be asking the "wrong" questions here.
I have a situation where we have a domain that will serve static files (HTML, images, etc.) that are configured and built by an existing, separate application. At certain scheduled datetimes, we will need the content of the sites to change to a different set of static files.
Since the files will all be prepared ahead of time, I was wondering if it was possible for nginx to be able to "switch" the root directory to direct traffic to the appropriate place based on these scheduled datetimes.
So if there were a series of directories maybe like this:
/www.example.com-20160701000000/content/public
/www.example.com-20160708000000/content/public
/www.example.com-20160801120000/content/public
And then the configuration would say that from 1 July 2016 00:00:00 through 7 July 2016 23:59:59, the site root for www.example.com would be /www.example.com-20160701000000/content/public, and so forth.
Some other things I've looked into:
Some form of middleware like PHP, but I want to avoid this for portability.
SSI doesn't really seem like an option. It seems like I'd have one root directory and, in index.html, contents like <!--# include file="www.example.com-$datestamp/content/public/index.html" -->, but then I'd have to do this for every page. I'm also not sure how it would work if the page names differ between versions.
A cron job or something else that either moves files or edits a file at the appropriate time; this just seems like a really bad potential failure point.
So tl;dr, can nginx be configured in some way to have root directories that are active for a domain at different scheduled times? Or is there a better approach to this problem that I'm not aware of?
nginx has variables called $time_iso8601 and $time_local which you could use to construct a dynamic root. See this document for details.
One approach would be to construct your rules as a map and set the root directive appropriately using the mapped variable or named captures. See this document for details.
I tested the concept using this:
map $time_iso8601 $root {
    default           /usr/local/www/test;
    ~^2016-06-2[0-9]  /usr/local/www/test/test-20160625;
}

server {
    root $root;
    ...
}
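Building on that tested snippet, the asker's schedule could be approximated like this (directory names taken from the question, fallback path hypothetical). Regex entries in a map are tried in the order they appear and the first match wins, so list the most specific ranges first. Note that $time_iso8601 looks like 2016-07-01T00:00:00+00:00, so a mid-day cutover such as the 12:00 switch on 1 August would need the hour field in the pattern; the sketch below switches at midnight instead:

```nginx
map $time_iso8601 $root {
    default           /var/www/fallback;                               # hypothetical
    ~^2016-07-0[1-7]  /www.example.com-20160701000000/content/public;  # 1-7 July
    ~^2016-07         /www.example.com-20160708000000/content/public;  # rest of July
    ~^2016-08         /www.example.com-20160801120000/content/public;  # August (approx.)
}

server {
    server_name www.example.com;
    root $root;
}
```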
I have fifteen virtual hosts (servers), each with its own log location. I am a bit confused about the best way to write the nginx config for them: all server blocks in one file, or a separate file for each server?
Which would be a more efficient way?
Nginx reads its config once on start (or reload), so do whatever is more convenient for you.
I would write related server blocks together in one file, and have one bunch of related servers per file.
Or have one file per server.
Or write them all in one file.
Efficiency is not affected by how you arrange the blocks across files, so it would be the same in either case.
If there's some commonality between your virtual hosts configs, such as general SSL settings or denying certain types of requests, you may want to use includes.
I like to keep separate vhost config files; it's easier to take one domain offline for maintenance, for instance.
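For the shared-settings case, the include approach might look like this (file names hypothetical):

```nginx
# /etc/nginx/snippets/common-ssl.conf -- shared across all vhosts
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers   HIGH:!aNULL:!MD5;

# /etc/nginx/sites-available/site-a.conf
server {
    listen 443 ssl;
    server_name site-a.example;
    include /etc/nginx/snippets/common-ssl.conf;
    root /var/www/site-a;
}
```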
Suppose the URL http://example.com/test.php. If I type this URL into the browser address bar, the PHP code is executed and its output is returned to me. Fine. But what if, instead of executing it, I wanted to view its source as plain text? Is there a way to issue such a request?
I believe there must be some way, and my concern is that an outsider could retrieve sensitive code, such as configuration files, by guessing their location. For example, Joomla installations have a configuration.php in the root folder. If someone retrieves that file as plain text, the database credentials in it are seriously compromised. Obviously this could be prevented with proper permissions, but it's all too common to just set 0777 permissions on everything and forget about access denials.
For PHP: if properly configured, there is no way to download it. File permissions won't help either way, since the webserver needs to be able to read the files, and it is the webserver that serves the content. However, a webserver can, for instance, be configured to serve PHP files with the x-httpd-php-source type, or the PHP/webserver configuration may be broken. This is why files that don't need direct access (db config, class definitions, etc.) should live outside the document root: then there is no way those files will be served by accident even when the webserver config is incorrect or failing. If your current hoster does not allow you to store files outside the document root, switch hosting a.s.a.p.
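The outside-the-docroot layout, plus a defensive deny rule as a second layer, might be sketched like this (paths hypothetical; the deny rule only matters if sensitive files do end up under the root):

```nginx
# Layout:
#   /var/www/site/public   <- document root: only files meant to be served
#   /var/www/site/config   <- db credentials etc., never reachable via URL
server {
    root /var/www/site/public;

    # Belt-and-braces: refuse to serve well-known config file names.
    location ~* (^|/)configuration\.php$ {
        deny all;
    }
}
```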
There is a way to issue a request that downloads the source code of http://example.com/test.php only if the server is configured to provide a URL for doing so. Usually it isn't, so usually there is no way to issue such a request.
I have the same code which will be used for several sites. In the Nginx config I wanted to have all the sites point to the same code folder.
I think this should work. The only catch is that I want each site to use a different config file.
How could something like this be achieved? Surely I wouldn't need to duplicate all the websites code just to have each one have a different config?
What language are you scripting in? Most languages have a way to examine the incoming request. From it you could extract the domain name and choose which config file to load with an if or switch statement.
You could also use a GET variable, for example www.domain.com/index.html?conf=conf1.conf. Then in your controller you'd look at that GET variable to determine which config file to load.
Either of these solutions should be easy to find in the docs for your scripting language.
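The first suggestion can be sketched like this (shown in shell for brevity; the same host-to-config mapping works in any server-side language, and all file names here are made up):

```shell
# Map the request's Host header to a per-site config file.
pick_config() {
    host="${1%%:*}"        # strip any :port suffix
    host="${host#www.}"    # treat www.domain.com like domain.com
    case "$host" in
        example.com) echo "/etc/myapp/example.com.conf" ;;
        example.org) echo "/etc/myapp/example.org.conf" ;;
        *)           echo "/etc/myapp/default.conf" ;;
    esac
}

pick_config "www.example.com:8080"   # prints /etc/myapp/example.com.conf
```

A fixed lookup table like this is also safer than the GET-variable approach, since the client can never request an arbitrary config file.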