I'd like to configure Nginx in such a way that it takes minimal effort to add new sites.
I see it like this: when creating a new site, I put it in a subfolder under /var/www and add a new location to the nginx config file which just includes a config template for the required site type. It could look like this:
server
{
listen 80;
server_name localhost;
root /var/www;
location /site1
{
include drupal.conf;
}
location /site2
{
include wordpress.conf;
}
}
But, unfortunately, this doesn't work in my case. The issue is with nested locations. I have the following lines in one of included templates:
...
location /core/
{
deny all;
}
location /
{
try_files $uri $uri/ @rewrite;
}
....
Nginx gives me the following errors:
location "/core/" is outside location "/site1" in ...
location "/" is outside location "/site1" in ...
So I need to specify the full path for each site (like /site1/core/), but then I won't be able to extract the template as one reusable piece.
Previously, as an alternative, I configured multiple server directives with different server_name values (site1.localhost, site2.localhost, ...) and edited the /etc/hosts file. In that case I didn't need nested locations, since everything was under the root of the domain. But, as I said, I'm looking for a way to simplify the workflow as much as possible, and editing /etc/hosts seems to me like an extra action which I'd like to avoid.
So the question is: how do I best handle this situation? How do you organize work on different sites locally?
At home and at work I use a combination of Bind9 and Nginx to solve this problem. It does require some setup, but once it's up you shouldn't ever need to touch your nginx config file again. I've added some limitations at the bottom.
Setup
Set up a DNS server (Bind9, dnsmasq)
1) Set up a local DNS server and create a zone called dev.
2) Create a wildcard A record in the dev zone (see the zone-file sketch after these steps):
* A 127.0.0.1
And restart your DNS server.
3) Make sure you can dig *.dev and verify that you get back 127.0.0.1.
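For Bind9 specifically, the wildcard record can live in a small zone file. This is only a rough sketch; the file path, serial and nameserver name are hypothetical, so adapt them to your setup:
; /etc/bind/db.dev -- hypothetical zone file for the "dev" zone
$TTL    86400
@       IN      SOA     ns1.dev. admin.dev. (
                        1         ; serial
                        604800    ; refresh
                        86400     ; retry
                        2419200   ; expire
                        86400 )   ; negative cache TTL
@       IN      NS      ns1.dev.
ns1     IN      A       127.0.0.1
*       IN      A       127.0.0.1 ; wildcard: any *.dev name resolves to 127.0.0.1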
Set up Nginx
1) In your nginx.conf, or wherever you're storing your conf.d stuff, create a vhost entry that looks more or less like this; you can adapt it to your needs.
server {
listen 80;
server_name *.dev;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
if ($host ~* ^(.*)\.dev$) {
set $site $1;
}
if (!-d /var/www/$site/) {
return 404;
}
location ~ index.php$ {
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass backend;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/$site/$fastcgi_script_name;
include fastcgi_params;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_intercept_errors on;
fastcgi_ignore_client_abort off;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
}
location ~ / {
try_files $uri $uri/ /index.php?$args;
}
}
2) Restart nginx service.
3) Profit
Once this is set up, to create a new site all you have to do is create a new folder in /var/www/.
mkdir -p /var/www/sitename/
That site, and the PHP underneath it, can be accessed via sitename.dev.
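To verify a new site end to end, something like this should work (the site name is hypothetical; assumes dig and curl are installed and the wildcard DNS and vhost above are in place):
mkdir -p /var/www/site1/
dig +short site1.dev        # should print 127.0.0.1
curl -I http://site1.dev/   # should be answered by the wildcard vhost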
As stated earlier, there are a couple of limitations with this. sitename has to be all lower case and contain no spaces or special characters (periods included). Secondly, it really only works for sites that are bootstrapped through index.php.
If you have radically different site structures, you can modify a couple of things to give you a more robust setup. For example you could write your config out so that it looked something like this.
server {
listen 80;
server_name *.*.dev;
[...]
if ($host ~* ^(.*)\.(.*)\.dev$) {
set $site $1;
set $folder $2;
}
if (!-d /var/www/$folder/$site/) {
return 404;
}
[...]
fastcgi_param SCRIPT_FILENAME /var/www/$folder/$site/$fastcgi_script_name;
[...]
}
And assuming you update your DNS server to respond to *.*.dev, you could then lay out your directories as follows, just to give you an idea.
/var/www/wordpress/site1
/var/www/wordpress/site2
/var/www/wordpress/site3
/var/www/zend/site1
/var/www/zend/site2
/var/www/zend/site3
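Hypothetically, the host-to-directory mapping would then look like this (the two-level names are still covered by the wildcard record in the dev zone):
# http://site1.wordpress.dev  ->  /var/www/wordpress/site1
# http://site3.zend.dev       ->  /var/www/zend/site3
dig +short site1.wordpress.dev   # should still return 127.0.0.1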
Like I said earlier, I use this setup at home and at work with 15+ people. Our work setup is a little more complex (shared server, everyone has their own home folder), but it works just fine there. Personally, I prefer working on subdomains rather than localhost paths.
Hope this helps!
What about using something like the alias function offered by nginx?
http://wiki.nginx.org/HttpCoreModule#alias
If that doesn't work for your workflow, does symlinking to the /core directory prevent that error?
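If you go the alias route, a per-site location might look roughly like this. It's only a minimal sketch with hypothetical paths, and note that alias combined with try_files has some long-standing quirks, so test it before relying on it:
server {
    listen 80;
    server_name localhost;
    location /site1/ {
        alias /var/www/site1/;                        # /site1/foo is looked up as /var/www/site1/foo
        try_files $uri $uri/ /site1/index.php?$args;  # the fallback has to keep the /site1/ prefix
    }
}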
Related
The environment is as follows:
I have https://website.com and a blog at https://website.com/blog
The root path points to a Passenger-hosted Rails app, and the blog subdirectory points to a WordPress app via php-fpm
Everything works fine with my Nginx config, but when I try to change the permalink structure to anything other than "Plain", I get a 404 page from the Rails app as if the location blocks aren't utilized. I tried looking at the error log in debug mode, and I do see it attempting to try_files, but ultimately it fails with the Rails 404 page.
It may be worth noting that the entire site is behind Cloudflare. Not sure if it could be something with that, though I kind of doubt it.
Here is the almost-working Nginx config I'm using:
server {
listen 80 default_server;
server_name IP_ADDRESS;
passenger_enabled on;
passenger_app_env production;
passenger_ruby /home/ubuntu/.rbenv/shims/ruby;
root /web/rails/public;
client_max_body_size 20M;
location ^~ /blog {
passenger_enabled off;
alias /web/blog;
index index.php index.htm index.html;
# Tried the commented line below, but then nothing works.
# try_files $uri $uri/ /blog/index.php?$args;
# The line below works, but permalinks don't.
try_files $uri $uri/ /blog/index.php?q=$uri&$args;
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/run/php/php7.3-fpm.sock;
# Tried the commented line below, but then nothing works
# fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# The line below works, but permalinks don't.
fastcgi_param SCRIPT_FILENAME $request_filename;
}
}
}
I wanted to leave this as a short comment, but I don't have enough reputation for that.
I used the following block and it worked for me. I added an add_header directive just to check whether my request is reaching the correct block.
location ^~ /blog {
try_files $uri $uri/ /index.php?$args;
add_header reached blog;
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass php;
}
}
If your server is behind Cloudflare, you can try adding an /etc/hosts entry on your local machine (if you're using Ubuntu/Mac). That will bypass the DNS lookup and the site will be accessed directly via its IP address.
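For example, an entry like this in /etc/hosts (the origin IP shown is hypothetical) makes your machine talk to the origin directly instead of going through Cloudflare:
203.0.113.10    website.com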
Check if any redirects are happening due to any other Nginx configuration.
Also, you mentioned in the question that the site is https://, while your server block only has listen 80, i.e. non-HTTPS.
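If the origin itself is supposed to terminate TLS (rather than relying on Cloudflare's flexible mode), the server block would also need something along these lines; the certificate paths here are hypothetical:
listen 443 ssl;
ssl_certificate     /etc/ssl/certs/website.com.pem;
ssl_certificate_key /etc/ssl/private/website.com.key;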
Check for the response headers with
curl -XGET -IL site-name.tld
which may help you debug the situation further.
Difference between alias and root directives https://stackoverflow.com/a/10647080/12257950
I'm trying to install WordPress on Ubuntu 18.04 on a subdomain. I set up the Nginx files in sites-available, but I got a 502 error in the browser because WordPress uses a .php file for its index, so I added index.php to the index list in sites-available. After adding index.php, when I try to access the URL in a browser it downloads a file named after the subdomain address.
Here's my config in sites-available:
server {
listen 80;
listen [::]:80;
root /var/www/apt;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html index.php;
server_name apt.forrum.ro;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
}
Please let me know how to fix it.
Simplified: Nginx uses the try_files directive to serve the file in the folder to the user. This is why your PHP file is being sent to the user; it's then downloaded rather than displayed, since browsers don't really know how to show raw PHP to the user.
What you need to do is tell Nginx to run the file. In the case of PHP you can use FastCGI. There are many guides to doing this on Ubuntu such as This One.
Once you have it installed, all the directives for FastCGI are described by Nginx themselves Here.
Their example is posted here:
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
# Mitigate https://httpoxy.org/ vulnerabilities
fastcgi_param HTTP_PROXY "";
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
# include the fastcgi_param setting
include fastcgi_params;
# SCRIPT_FILENAME parameter is used for PHP FPM determining
# the script name. If it is not set in fastcgi_params file,
# i.e. /etc/nginx/fastcgi_params or in the parent contexts,
# please comment off following line:
# fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
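Tying that back to the config in the question, a minimal sketch for Ubuntu 18.04 could look like this, assuming PHP-FPM is installed and listening on its default socket (the socket path and snippet name may differ on your system):
server {
    listen 80;
    listen [::]:80;
    root /var/www/apt;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name apt.forrum.ro;
    location / {
        try_files $uri $uri/ =404;
    }
    # Hand .php requests to PHP-FPM instead of serving them as downloads
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;            # shipped with Ubuntu's nginx package
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;   # adjust to your PHP-FPM version
    }
}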
I'm trying to set up WordPress on a system that has another PHP application installed, using nginx as the web server.
I've simplified my config file to the maximum. The following config is serving one post of my blog:
server {
listen 80;
server_name blog.ct.com;
root /home/ff/www/blog/;
index index.php index.html;
location / {
try_files $uri $uri/ /index.php?$uri&$args =405;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_buffer_size 128k;
fastcgi_buffers 64 32k;
fastcgi_busy_buffers_size 128k;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param APPLICATION_ENV development;
fastcgi_param HTTP_X_FORWARDED_PROTO https;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
}
}
But, due to my system's requirements, I need to serve the blog from within a sub-path (in my final system http://blog.ct.com/ should serve my custom PHP app and http://blog.ct.com/vendor should serve the WordPress blog).
The local root directory for WordPress must be /home/ff/www/blog/ (this cannot be changed), while my custom app's directory is /home/ff/www/myapp/. So I think I need to reserve location / for my custom app and create a location /vendor for the blog.
If I add /vendor and return 403 in / (just to make debugging easier), the browser says 405 (notice the =405 fallback in /vendor, also added to make debugging easier):
location /vendor {
try_files $uri $uri/ /index.php?$uri&$args =405;
}
location / {
return 403;
}
So I think nginx is going into location /vendor but is not finding my PHP script at /home/ff/www/blog/index.php, so it's returning the fallback 405.
Any idea why this could happen?
How can I load http://blog.ct.com/vendor as the WordPress root while keeping http://blog.ct.com/ served by another PHP script?
I've found the following hints that gave me the clue to fix the problem (in case someone has the same problem as me, this may help):
Using location /path is not the same as using location ~(/path) (regexes have a different priority, so maybe they are not being checked in the order you think).
Adding error_log /your/path/log/error.log debug; to any location block may help you see how nginx is serving every request (e.g. whether it goes to the fastcgi location, to location /vendor, or to the server{ block).
alias /var/www/path/vendor works differently from root /var/www/path/vendor (check Nginx -- static file serving confusion with root & alias);
In case of the root directive, full path is appended to the root including the location part, whereas in case of the alias directive, only the portion of the path NOT including the location part is appended to the alias.
Using rewrite together with alias can help you run the PHP file you want, independently of the path:
if (!-f $request_filename) {
rewrite ^ $document_root/index-wp.php last;
}
Take care with the SCRIPT_FILENAME you are using (check it with error_log, see above): maybe you need fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; but you are loading fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;, so depending on your previous config you may be attaching the document root twice.
Two different fastcgi configurations can be used if you change your index.php file names. E.g. location ~ wp\.php$ { will match wp.php while location ~ \.php$ { will match all other PHP files like index.php; see the sketch below.
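A minimal sketch of that last hint, with hypothetical paths and the PHP-FPM address from the config above:
# WordPress entry point renamed to index-wp.php, always served from the blog tree
location ~ wp\.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /home/ff/www/blog/index-wp.php;
    fastcgi_pass 127.0.0.1:9000;
}
# Every other .php file belongs to the custom app
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}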
I have a local NginX testing server on my Windows 10 machine. This is just for creating and testing websites, it is not served to the internet.
I've been testing one site successfully at localhost for a while, but now I want to add a second test site. I thought I could achieve this by duplicating the server{} block in the nginx.conf file and changing the server_name and a few other parameters, but that doesn't seem to work. When I try to load my second test site in Chrome, I get this error:
This site can’t be reached
local_test_2’s server DNS address could not be found.
My site at localhost still works, though.
Why is my second test site not working?
Here's my current nginx.conf file:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type text/html;
sendfile on;
keepalive_timeout 65;
server {
#Server basics
server_name localhost;
listen 80;
index index.html index.php;
root c:/nginx/html;
location / {
try_files $uri $uri/ /index.php?_url=$uri&$query_string;
}
location ~ .(php|htm|html)$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name;
include fastcgi_params;
}
}
server {
#Server basics
server_name local_test_2;
listen 80;
index index.html index.php;
root "C:\Users\User Name\Documents\Test\example.com";
location / {
try_files $uri $uri/ /index.php?_url=$uri&$query_string;
}
location ~ .(php|htm|html)$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name;
include fastcgi_params;
}
}
}
Update:
My C:\Windows\System32\drivers\etc\hosts file has the following:
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
The current 'localhost' specification is commented out. Should I change this file?
You need to add local_test_2 to your Windows hosts file at
C:\Windows\System32\drivers\etc\hosts
In the hosts file, add the line below at the end:
127.0.0.1 local_test_2
You can also check this reference for setting up a new host in nginx: Setting up Nginx on local machine
local_test_2 is a URL that you created for testing purposes. Since you didn't buy it from a registrar, no DNS provider will be able to resolve that URL to an IP address.
Every operating system has a hosts file (in Linux it is /etc/hosts) which can be used to map URLs to IP addresses without using an online DNS service. So in your case you can append the following line,
127.0.0.1 local_test_2
which tells the system to route all requests for local_test_2 to the same machine (127.0.0.1). No other changes are required in the hosts file.
Refer to this link for more details on hosts files and the different files used by different operating systems.
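After saving the hosts file you can quickly verify the mapping from a command prompt (curl.exe ships with recent Windows 10 builds):
ping local_test_2
curl.exe -I http://local_test_2/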
On a Zend Framework 2 based website (test environment on nginx and live environment on Apache) there is a category "courses" and its pages have URIs like this:
domain.tld/courses/123-Name of course that can contain ®, €, (, ), and other special chars
The courses names come from the database and are URL-encoded for the internal links:
domain.tld/courses/123-Name%20of%20course%20that%20can%20contain%20%C2%AE%2C%20%E2%82%AC%2C%20%C3%A4%2C%20(%2C%20)%2C%20and%20other%20special%20chars
It's working fine, but when I try to access a page using a special character without encoding, a 404 error occurs.
An example of a website that uses special characters is Wikipedia. You can use
http://en.wikipedia.org/wiki/Signal_(electrical_engineering)
or
http://en.wikipedia.org/wiki/Signal_%28electrical_engineering%29
and always get the page you want.
Does someone know how to achieve such behavior ("à la Wikipedia")? (Maybe with an HTTP redirect via an .htaccess rule?)
UPDATE:
/etc/nginx/ax-common-vhost
server {
listen 80;
server_name
foo.loc
bar.loc
baz.loc
;
if ($host ~ ^(?<project>.+)\.(?<area>.+)\.loc$) {
set $folder "$area/$project";
}
access_log /var/log/nginx/$area/$project.access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_min_length 1000;
gzip_types text/plain text/xml application/xml;
client_max_body_size 25m;
root /var/www/$folder/public/;
try_files $uri $uri/ /index.php?$args;
index index.html index.php;
location / {
index index.html index.php;
sendfile off;
}
location ~ (\.inc\.php|\.tpl|\.sql|\.tpl\.php|\.db)$ {
deny all;
}
location ~ \.htaccess {
deny all;
}
if (!-e $request_filename) {
rewrite ^.*$ /index.php last;
}
location ~ \.php$ {
fastcgi_cache off;
#fastcgi_pass 127.0.0.1:9001;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_read_timeout 6000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param APPLICATION_ENV development;
fastcgi_param HTTPS $https;
}
}
You can achieve the intended URL rewrite behavior by having the correct rewrite rules inside of your .htaccess file.
I suggest you have a look at the rewrite flags, particularly the B flag.
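For example, a hypothetical rule for the Apache/live environment; the pattern and target are only illustrative, not taken from your actual rules:
RewriteEngine On
# B re-escapes the captured course name before it is substituted into the target
RewriteRule ^courses/(.+)$ index.php?course=$1 [B,L,QSA]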
You should show us your nginx fastcgi configuration.
There are several ways to set PATH_INFO for PHP, and this is the string containing the path that ZF will have to manage.
One way is:
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
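In context, those two lines would typically sit inside the PHP location block, for example (a sketch based on the vhost above, with the php5-fpm socket assumed):
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}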
From this post it seems you could also use named captures this way to avoid all urlencoding of the PATH_INFO content:
location ~ ^(?<SCRIPT_FILENAME>.+\.php)(?<PATH_INFO>.+)$ {
(...)
fastcgi_param PATH_INFO $PATH_INFO;
So at least you would detect if the problem comes from having too much or not enough urlencoding.
By avoiding urlencoding in the webserver (and doing the same with Apache) you could manage urldecoding of the path on the PHP side. Since this time you know it would never be urldecoded for you, you would have to do it in PHP -- or maybe you would have to urlencode it -- well, you would have to manage the fact that the path may come in both versions.
This would, maybe, be a nice job for a Zend Framework router. One of the jobs of the router is to avoid things like .htaccess rewrite rules in Apache and to manage URLs in the application, in a stable and webserver-independent way.
The first step would be to test the path string and detect whether URL decoding needs to be done or not.
Of course, if you send a URL with a mix of url-encoded and url-decoded characters in the same string, things will get a lot harder, as you will not be able to decide (but it would be the same for a webserver). And since in your example you used parentheses that were not urlencoded in the generated encoded URL but were encoded in the Wikipedia example, your application will have to choose a policy for the RFC reserved characters.