*Actually* getting nginx to reload config

I'm trying to set up a subdomain with nginx on Ubuntu 13.04. I was actually able to do this before with:
server {
    root /home/ajcrites/projects/site-a/public/;
    server_name sub.localhost;

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Then http://sub.localhost would work perfectly.
Now, I've copied the same config and just changed root (to a path that exists) and server_name to some other value like subber.localhost. No matter what I do, I can't get subber.localhost to work. Also, no matter what I do, I can't get sub.localhost to not work while nginx is running. What's weird is that nginx will complain about errors in the config files if there are any, but even when there are none it doesn't matter: only sub.localhost will work.
I've tried doing all of the following, repeatedly and with various combinations:
Create a separate file in sites-available. nginx seems to think it's there, since it will complain about errors when I try to reload.
service nginx stop && service nginx start
nginx stop; nginx
nginx -s reload
When I stop it, pgrep does not show any nginx process, so it seems like it is correctly getting stopped. This does not appear to be a cache issue as a fresh browser instance reacts the same way.

I happen to have had similar problems; each time it was a simple issue (that took me a long time to find, though). For instance (sorry if some points are obvious, but sometimes we focus on the difficult and ignore the obvious; at least that's my case):
ensure the symlink is present in sites-enabled
the root path has read (and at least execute) access all the way from / for the nginx user
subber.localhost is properly defined in /etc/hosts for local tests (or in local DNS); see the quick checks below
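A quick way to run through those checks from a shell (paths and names are taken from the question; adjust them to your setup):
# is the symlink present in sites-enabled?
ls -l /etc/nginx/sites-enabled/
# can the nginx worker traverse every component of the document root path?
namei -l /home/ajcrites/projects/site-a/public/
# does the test name resolve locally?
grep subber.localhost /etc/hosts
# validate the config, then reload only if it parses cleanly
sudo nginx -t && sudo service nginx reload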
Maybe you could try to force the IP address that listen binds to. E.g. instead of
listen 80;
you could try
listen 127.0.0.1:80; # or
listen 192.168.0.1:80; # your local address (or remote, if that's the case)

Related

Securing phpMyAdmin by whitelisting IPs and changing alias

I’m trying to figure out the best way of securing access to my MariaDB database. I have a root non-wordpress site with 2 wordpress sites as directories (/blog and /shop) - each with separate databases - that use phpMyAdmin as a database viewer (accessible at /phpmyadmin). I want to increase the security so that it can’t be hacked so easily. However, I can’t seem to implement any of the recommended security measures.
Creating a .htaccess in /usr/share/phpmyadmin and adding the following to whitelist IPs and block all other IPs has no effect:
Order Deny,Allow
Deny from All
Allow from 12.34.56.78
Changing the phpMyAdmin url via the config file (so it’s not accessible at /phpmyadmin) also seems to have no effect.
I’m assuming that it’s because apache is not running (I use Nginx to run my main domain and the 2 wordpress sites). I can’t run apache and Nginx simultaneously (presumably because they’re both fighting for port 80), but what I don’t get is that when Nginx is running and apache is supposedly not running, how is the /phpmyadmin link still accessible?
Here’s my .conf file in /etc/nginx/sites-available (also symlinked to sites-enabled):
upstream wp-php-handler-four {
    server unix:/var/run/php/php7.4-fpm.sock;
}

server {
    listen 1234 default_server;
    listen [::]:1234 default_server;

    root /var/www/site;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /blog {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location /shop {
        try_files $uri $uri/ /shop/index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass wp-php-handler-four;
    }
}
I followed a tutorial to set this up (maybe I’m misunderstanding how it’s fully set up) but is this not actually using apache to access /phpmyadmin or is it using some web socket? How can I make the above security attempts work?
Note: the /usr/share/phpmyadmin/ dir is symlinked to /var/www/site/
Creating a .htaccess in /usr/share/phpmyadmin and adding the following to whitelist IPs and block all other IPs has no effect:
Order Deny,Allow
Deny from All
Allow from 12.34.56.78
Of course it won't have any effect, since this file is processed only by Apache.
I can’t run apache and Nginx simultaneously (presumably because they’re both fighting for port 80)
In the early days of nginx there was a technique of using nginx for static files and Apache for processing PHP scripts. Apache ran on some other port (for example, 8080) and listened only on the local IP (127.0.0.1). The nginx configuration for that looked like
upstream apache {
    server 127.0.0.1:8080;
}

server {
    ...
    location ~ \.php$ {
        proxy_pass http://apache;
    }
}
Nowadays it is rarely used, since PHP-FPM is more flexible and gives less server overhead. However, it can still be used when you have a complex .htaccess configuration and don't want to rewrite it for nginx/PHP-FPM.
but what I don’t get is that when Nginx is running and apache is supposedly not running, how is the /phpmyadmin link still accessible?
...
Is this not actually using apache to access /phpmyadmin or is it using some web socket?
This configuration uses the UNIX socket /var/run/php/php7.4-fpm.sock, where the PHP-FPM daemon is listening for requests (you can read the introduction of this article to get some additional details).
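If you want to see that for yourself, a couple of shell checks make it visible that PHP-FPM, not Apache, is answering the PHP requests (this assumes systemd and the socket path from the config above):
# which of the candidate services are actually running?
systemctl status nginx apache2 php7.4-fpm
# which process is listening on the unix socket nginx passes PHP requests to?
sudo ss -xlp | grep php7.4-fpm.sock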
How can I make the above security attempts work?
One of many possible solutions is:
Unlink /usr/share/phpmyadmin/ from /var/www/site/
Use the following location block (put it before the location ~ \.php$ { ... } one):
location ~ ^/phpmyadmin(?<subpath>/.*)? {
    allow 12.34.56.78;
    # add other IPs here
    deny all;

    alias /usr/share/phpmyadmin/;
    index index.php;
    try_files $subpath $subpath/ =404;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$subpath;
        fastcgi_pass wp-php-handler-four;
    }
}
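After unlinking the directory and reloading nginx (sudo nginx -t && sudo service nginx reload), requests to /phpmyadmin from any IP that is not in the allow list should be answered with 403, while the whitelisted address still reaches the phpMyAdmin login page.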
To add to the otherwise quite thorough answer:
Since Nginx doesn't use .htaccess files or the same syntax as Apache, you aren't being restricted the way Apache would restrict you. You may wish to find some other solution, or you could use what's built into phpMyAdmin: there is allow/deny functionality you can learn about in the documentation (https://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_AllowDeny_order and https://docs.phpmyadmin.net/en/latest/config.html#cfg_Servers_AllowDeny_rules); this will let you restrict access based on username and IP address.
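As a rough sketch of what that looks like in phpMyAdmin's config.inc.php (the IP is the placeholder from the question; check the linked documentation for the exact rule syntax before relying on this):
// inside the server definition in config.inc.php ($i is the server index)
$cfg['Servers'][$i]['AllowDeny']['order'] = 'deny,allow';
$cfg['Servers'][$i]['AllowDeny']['rules'] = array(
    'deny % from all',          // % matches any user
    'allow % from 12.34.56.78', // whitelist a single client IP
);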

Nginx location: 403 error / File not found

I set up my domain on my server using nginx. So far so good, my homepage works. But now I want to add some locations for later programming tests. My plan is to call different projects like mydomain.com/php/myprogramm.php
So I added some folders in /var/www/mydomain.com/php (my site index is in /var/www/mydomain.com/html).
Entering www.mydomain.com/php/ leads to a 403 error, and mydomain.com/php/myprogramm.php says "File not found"...
this is my nginx file:
server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;

    # Make site accessible from http://localhost/
    server_name mydomain.com www.mydomain.com;

    location / {
        root /var/www/mydomain.com/html;
        index index.html index.htm;
    }

    location /php/ {
        root /var/www/mydomain.com;
    }

    location /js/ {
        root /var/www/mydomain.com;
    }

    location /node/ {
        root /var/www/mydomain.com;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        #
        # With php5-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Of course, when I set up my domain I also ran sudo chown -R www-data:www-data /var/www/mydomain.com/html and sudo chmod 755 /var/www.
Some ideas someone? :/
Problem analysis
The first golden rule is:
nginx always serves a request from a single location only. (Re-)read http://nginx.org/en/docs/http/request_processing.html.
Based on your configuration:
Requests to (www.)mydomain.com/php/<whatever> for files not ending with .php will be served by location /php/ from /var/www/mydomain.com/php/<whatever>
Requests to (www.)mydomain.com/<whatever>.php will be served by location ~\.php$ from <default root ('html' by default)>/<whatever>.php
The first problem here is that you are not serving .php files from where you think you are. Learn from location documentation how the location block serving a request is chosen.
You will note that the 'File not found' error was not an nginx error, but a message generated by PHP. That helps to know where the problem comes from (frontend or backend).
Now about that 403: it seems nginx has trouble accessing the location where it is supposed to serve content from. Check /var/www/mydomain.com/php/ (directory + contents) rights.
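A quick way to check those permissions (the path is the one from the question; www-data is assumed to be the nginx user, matching the chown command above):
# show owner and mode of every component of the path
namei -l /var/www/mydomain.com/php/
# if something is missing, give the nginx user read/traverse access
sudo chown -R www-data:www-data /var/www/mydomain.com/php
sudo chmod -R u+rX /var/www/mydomain.com/php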
Proposed pieces of advice
Your configuration looks suboptimal.
If you use the same root in lots of location blocks, why not move it up one level so it becomes the default (which you can override in specific locations where needed)?
You can use nested locations, e.g. to solve your PHP file serving problem. Note that it is always a good idea to enclose regex locations inside prefix locations (What is the difference? Read the location documentation). The reason is that regex locations are order-sensitive, which is bad for maintenance. Prefix locations are not, since only the longest match with a request URI will be chosen.
Here is a proposed updated version of part of your configuration:
root /var/www/mydomain.com;

location / {
    root /var/www/mydomain.com/html;
    index index.html index.htm;
}

location /php/ {
    location ~ \.php$ {
        # Useless without use of $fastcgi_script_name and $fastcgi_path_info
        # Moreover, requests ending up here always end with .php...
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        # You seem to have copy-pasted this section without understanding it.
        # Good understanding of what happens here is mandatory for security.
    }
}
I suggest you read the documentation about fastcgi_split_path_info, $fastcgi_script_name and $fastcgi_path_info.
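For illustration only (this is not part of the proposed configuration above), the usual way those two variables end up being used once fastcgi_split_path_info has split the URI looks something like:
location ~ \.php(/|$) {
    # /app.php/some/extra/path  ->  $fastcgi_script_name = /app.php
    #                               $fastcgi_path_info   = /some/extra/path
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO       $fastcgi_path_info;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}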
For my testing right now I fixed the issue quite simply.
I forgot to check my php.ini and change cgi.fix_pathinfo to 0.
I also changed the group of my folders (they still had root) to www-data.
In the end I updated my configuration: I set root /var/www/mydomain.com; in my server block (server {}).
That's all I did.
But I will keep your advice in mind for later issues.
Thanks for your help I appreciate it.
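For reference, a minimal sketch of that php.ini change, assuming the usual Ubuntu PHP5 FPM layout:
; /etc/php5/fpm/php.ini
; do not let PHP guess/"fix" the script path from PATH_INFO
cgi.fix_pathinfo = 0
PHP-FPM then needs a reload to pick it up: sudo service php5-fpm reload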

Pass PHP requests on to PHP-FPM using sockets

I was trying to set up linux-dash on my server running gunicorn with nginx as reverse proxy. I tried setting up the configuration file as suggested here.
Every time I try to open one of the php scripts in the browser, it throws a "404 not found" error. As far as I understand the following block in the configuration file is responsible for it.
location ~ \.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    fastcgi_pass unix:/run/php5-fpm.sock;
    #fastcgi_pass localhost:9000; # using TCP/IP stack
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    try_files $uri $uri/ /index.php?$args;
    include fastcgi_params;
}
Can someone help me understand what the condition in the if block actually means? And where am I going wrong?
As far as the location directive is concerned, what I understand from it is that it tries to find if a php script needs to be executed and accordingly splits the path to the script and somehow using fastcgi runs that script on the browser. Please correct me if I am wrong and provide a better understanding of what it means.
It is checking for the existence of the file. Ensure the path exists. You should try to use try_files instead of the if block.
http://wiki.nginx.org/Pitfalls#Check_IF_File_Exists
http://wiki.nginx.org/NginxHttpCoreModule#try_files
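A minimal sketch of the try_files variant the links point at (this assumes scripts are addressed by their exact URI, i.e. no extra PATH_INFO after the .php part; the socket path is the one from the question):
location ~ \.php$ {
    # serve the request only if the .php file really exists on disk,
    # otherwise return 404, without using an if block
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php5-fpm.sock;
}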

How to enable xdebug with nginx?

My situation is the following:
I have a VM (Ubuntu server 13.04) with PHP 5.4.9-4ubuntu2.2, nginx/1.2.6, php5-fpm and Xdebug v2.2.1.
I'm developing an app using PhpStorm 6.0.3 (which I deploy on the VM).
My problem is, whenever I try to start a debugging session, the IDE never gets a connection request from the webserver (And thus, the session never starts).
I looked through a lot of recommendations about xdebug configuration and found nothing useful.
What I recently realized is that if I set the XDEBUG_SESSION cookie myself through the browser (Thanks FireCookie) I can debug my app... so my guess is there's something keeping the webserver from sending the cookie back to the client.
The thing is, I'm using the same IDE configuration in a different project, which is deployed into a different CentOS based VM (with lighttpd), and it works just fine.
I tried deploying my current project onto such a VM (changing the webserver to nginx) and it worked all right (unfortunately I lost that VM and can't check the config :().
So... here's my NginX config:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name localhost;

    location / {
        try_files $uri $uri/ /dispatch.php;
    }

    location ~ \.php$ {
        root /var/www/bresson/web;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index dispatch.php;
        fastcgi_param SCRIPT_FILENAME /var/www/$fastcgi_script_name;
        include fastcgi_params;
        #fastcgi_pass 127.0.0.1:9009;
    }
}
fpm config (/etc/php5/fpm/pool.d/www.conf):
listen = /var/run/php5-fpm.sock
xdebug.ini:
zend_extension=/usr/lib/php5/20100525/xdebug.so
xdebug.remote_port=9000
xdebug.remote_enable=On
xdebug.remote_connect_back=On
xdebug.remote_log=/var/log/xdebug.log
Any idea will be much appreciated. Thanks!
EDIT:
Another thing I tried was to start a session from php and I saw that the session cookie was created without any problem...
2nd Edit:
I think I found where the problem is: the URI.
I wrote another script in order to try configuration parameters and stuff (A much simpler one), and it worked right out!.
So eventually I figured the problem was that the query parameters (i.e.: XDEBUG_SESSION_START=14845) were not reaching my script.
The problem is my starting URI, which is of the form /images/P/P1/P1010044-242x300.jpg. Through some virtual host configuration I should be able to route it to something like /dispatch.php/images/P/P1/P1010044-242x300.jpg, and use the rest of the URI as parameters. So... I haven't found a solution per se, but now I have a viable workaround (pointing my starting URL to /dispatch.php) which will do for a while. Thanks
Just in case there's someone reading this... I got it!
The problem was nginx's configuration. I had just copied a template from somewhere, but now I read a little more and found out that my particular config was much simpler:
location / {
    root /var/www/bresson/web/;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
In my case, every request has to be forwarded to my front-controller (which then analyzes the URI), so it was really simple.
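One possible refinement (my own assumption, not part of the answer): if you want nginx to keep serving static assets directly and only hand everything else to the front controller, the classic pattern combines try_files with a named location:
root /var/www/bresson/web;

location / {
    try_files $uri @dispatch;
}

location @dispatch {
    include fastcgi_params;
    # fastcgi_params already forwards REQUEST_URI, so dispatch.php still
    # sees the original URI and can route on it
    fastcgi_param SCRIPT_FILENAME $document_root/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}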

Invalid update of symlink static files with nginx

I have got a Symfony 2.2.1 project which runs with nginx/1.2.6 (Ubuntu 13.04 VirtualBox).
The rendering of assets is OK with hard links.
With symlinks, it works only on the first initialisation.
When I update a symlink's source, the browser rendering turns my modifications into ����� characters. There are no errors from the browser, and the part without modifications is not affected.
Example of the end of my CSS file after modification:
[...]
div.form-actions {
text-align: center;
}
�����
Currently, I use hard links. I did not have this problem with Apache2... :/
Have you got an idea?
Thanks
Nginx site conf:
server {
    listen 80;

    root /media/sf_NetBeansProjects/XXXX/web;
    index app.php;
    server_name XXXX.lo;

    location / {
        # try to serve file directly, fallback to rewrite
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        # rewrite all to app.php
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ ^/(app|app_dev)\.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    error_log /media/sf_NetBeansProjects/XXXX/app/logs/nginx_errors.log;
    access_log /media/sf_NetBeansProjects/XXXX/app/logs/nginx_access.log;
}
The subtlety is that /media/sf_NetBeansProjects is a VirtualBox shared folder with my Windows 8 host, but as I said previously, Apache2 was always OK with that.
Try restarting php5-fpm after creating the symlink:
sudo service php5-fpm reload
Also check the disable_symlinks option: http://nginx.org/en/docs/http/ngx_http_core_module.html#disable_symlinks
This article helped:
https://coderwall.com/p/ztskha
"Simply spoken, sendfile() uses kernel calls to copy files directly from disc to tcp. If you are using remote filesystems (like nfs or the VirtualBox Guest Additions stuff), this method isn't reliable."
Essentially, turn off sendfile for NGINX if you are trying to serve files on your guest VM that exist on your host.
"To turn off sendfile() in Apache, you can use the EnableSendfile off directive, for nginx use sendfile off."
OK, well, there's one thing that comes to my mind: maybe you're viewing the binary data of the image file because the browser isn't identifying it as an image, perhaps because nginx isn't sending the content-type (it could also be for another reason). But I have one suggestion: add this in your default location /:
location / {
    try_files ..... ;
    types {
        image/jpeg jpg jpeg;
    }
}
Alternatively, you can include mime.types inside the server block:
server {
    #bla bla bla
    include mime.types;
    location / {
        #bla bla
    }
}
I'm not sure if this will work or not, but it's worth a try.
Try clearing your browser cache; sometimes nginx serves the file as raw data with no mime-type set.
Also try changing the HTTP headers: set the expiration and cache-control per file to a minimum, depending on whether your project is still in development, so that the file pushed by the server is always up to date and is not cached by the browser.
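A hedged sketch of what that could look like while the project is still in development (placed in the location that serves the assets):
location / {
    # ask browsers not to cache anything while developing:
    # "expires -1" sends an Expires date in the past plus "Cache-Control: no-cache"
    expires -1;
    try_files $uri @rewriteapp;
}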
I had the same problem, using the same setup.
You need to disable sendfile in nginx in order to properly serve these static files under symbolic links.
location / {
    sendfile off; # do it before try_files
    # try to serve file directly, fallback to rewrite
    try_files $uri @rewriteapp;
}
