Pass PHP requests on to PHP-FPM using sockets - nginx

I was trying to set up linux-dash on my server, which runs gunicorn with nginx as a reverse proxy. I tried setting up the configuration file as suggested here.
Every time I try to open one of the PHP scripts in the browser, it throws a "404 Not Found" error. As far as I understand, the following block in the configuration file is responsible for it:
location ~ \.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    fastcgi_pass unix:/run/php5-fpm.sock;
    #fastcgi_pass localhost:9000; # using TCP/IP stack
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    try_files $uri $uri/ /index.php?$args;
    include fastcgi_params;
}
Can someone help me understand what the condition in the if block actually means? And where am I going wrong?
As for the location directive, my understanding is that it detects when a PHP script needs to be executed, splits the path to that script accordingly, and somehow uses FastCGI to run the script and return the result to the browser. Please correct me if I am wrong and help me understand it better.

It is checking for the existence of the file. Ensure the path exists. You should use try_files instead.
http://wiki.nginx.org/Pitfalls#Check_IF_File_Exists
http://wiki.nginx.org/NginxHttpCoreModule#try_files
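A minimal sketch of the try_files version (the socket path is copied from the question; the rest is an assumption about a typical PHP-FPM setup):

```nginx
location ~ \.php$ {
    # Return 404 straight from nginx when the script does not
    # exist on disk, instead of passing the request to PHP-FPM.
    try_files $uri =404;

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php5-fpm.sock;
}
```

This replaces the if (!-f ...) check: try_files performs the same existence test without the well-known pitfalls of if inside a location block.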

Related

Serving remote static files with symfony3

I have a problem with my nginx configuration. I have two servers: one with nginx and one with my web app in Symfony3.
Here is my configuration:
location /portal/mysite/ {
    set $frontRoot /srv/data/apps/mysite-portal-stag/current/web;
    set $sfApp app.php; # Change to app.php for prod or app_dev.php for dev
    root /srv/data/apps/mysite-portal-stag/current/web;
    rewrite ^/portal/mysite/(.*)$ /$1 break;
    try_files $uri @sfFront;
}
location @sfFront {
    root /srv/data/apps/mysite-portal-stag/current/web;
    fastcgi_pass myserver:myport;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $frontRoot/$sfApp;
    fastcgi_param SCRIPT_NAME /portal/mysite/$sfApp;
}
The website works for all the PHP scripts, but all the assets (static files) come back broken. I don't understand nginx well enough to indicate which files are static and to "tell" my proxy that they aren't scripts.
The try_files directive automatically tries to find static files and serve them as static, before giving up and letting the request be served as a script.
http://nginx.org/r/try_files
Checks the existence of files in the specified order and uses the first found file for request processing; the processing is performed in the current context. The path to a file is constructed from the file parameter according to the root and alias directives. It is possible to check directory’s existence by specifying a slash at the end of a name, e.g. “$uri/”. If none of the files were found, an internal redirect to the uri specified in the last parameter is made.
Note that although you're already using try_files, it appears that perhaps your path handling isn't up to spec.
As for your own answer with a temporary solution, there's nothing wrong with using a rewrite or two, but that said, it looks like you'd benefit from the alias directive.
http://nginx.org/r/alias
Defines a replacement for the specified location.
However, you've never explained why you're serving stuff out of /tmp. Note that /tmp is often automatically cleared by some cron scripts, e.g., on OpenBSD, the /etc/daily script would automatically find and remove files older than about 7 days (on a daily basis, as the name suggests).
In summary, you should first figure out what is the appropriate mapping between the web view of the filesystem and your filesystem.
Subsequently, if a prefix is found, just use a separate location for the assets, together with alias.
Else, figure out the paths for try_files to work as intended.
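As a sketch of the alias approach (assuming the assets live under an asset/ directory inside the web root quoted in the question):

```nginx
# Serve static assets directly; alias replaces the matched
# /portal/mysite/asset/ prefix with the directory below, so no
# rewrite is needed.
location /portal/mysite/asset/ {
    alias /srv/data/apps/mysite-portal-stag/current/web/asset/;
}
```

The part of the URI after the location prefix is appended to the alias path, which is exactly the mapping the rewrite in the temporary solution emulates by hand.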
I have found a very ugly solution until anyone finds a better one; here is what I have done:
I copied the whole assets directory to the proxy server where nginx runs.
Here is my new config:
location /portal/mysite/ {
    set $frontRoot /srv/data/apps/mysite-portal-stag/current/web;
    set $sfApp app.php;
    root /srv/data/apps/mysite-portal-stag/current/web;
    rewrite ^/portal/mysite/(.*)$ /$1 break;
    try_files $uri @sfFront;
}
location /portal/mysite/asset {
    root /tmp/mysite/asset;
    rewrite ^/portal/mysite/asset/(.*)$ /$1 break;
}
location @sfFront {
    set $frontRootWeb /srv/data/apps/mysite-portal-stag/current/web;
    root /srv/data/apps/mysite-portal-stag/current/web;
    fastcgi_pass myAdressWeb:myPort;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $frontRoot/$sfApp;
    fastcgi_param SCRIPT_NAME /portal/mysite/$sfApp;
}
And now it's working: all the JS/CSS and pictures are found.
If anyone can think of a "cleaner" answer, they are more than welcome to post it.

rewrite or internal redirection cycle with NGINX

I have a PHP API which lives off the URL path /api and on OSX the following configuration works fine:
location /api {
    try_files $uri /api/index.php$is_args$args;
    fastcgi_pass PHP:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
This same location block on Ubuntu, however, seems to result in:
[error] 9#0: *3 rewrite or internal redirection cycle while internally redirecting to "/api/index.php"
This happens regardless of whether I explicitly call http://localhost/api/index.php, use a directory reference http://localhost/api, or pass some params to the index.php script with a call like http://localhost/api/actions/recent.
Can anyone help me understand why Ubuntu and OSX might be different? And how can I get around this rewrite error?
Full details for OSX and Ubuntu can be found here:
https://gist.github.com/ksnyde/80ac9a64a6cb03927838
Well, my initial supposition that OSX and Ubuntu were behaving differently did not prove to be the case.
The turning point for me was understanding a bit more about the genesis of this error. While the text of the error doesn't exactly give it away, it basically indicates that nginx has tried all the variations in the patterns listed on the try_files line. In fact, many people add a =404 to the end of such a line so it resolves to a more meaningful error when no matches are found.
Once I realised what the error was actually saying, it led me to the realisation that there was a mismatch between the path being passed to FPM and the directory structure FPM was expecting. Stupid... but then all hard errors typically have an element of stupidity. :)
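To illustrate the =404 tip: the last parameter of try_files is either a URI to which nginx redirects internally or a status code. If that URI itself cannot ultimately be served, the redirect loops; a status code cannot loop. A sketch:

```nginx
# May cycle: if /api/index.php does not resolve to something
# servable under the root, nginx keeps redirecting to it internally.
try_files $uri /api/index.php$is_args$args;

# Cannot cycle: when no file parameter matches, nginx simply
# answers with a 404 status.
try_files $uri $uri/ =404;
```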

Nginx root directive inside location block is not working as I expect

I'm having a big headache while configuring Nginx to work inside a location block.
I'm developing a web application with Laravel, and it is located at /srv/http/zenith. With Laravel, the index is inside the public folder, so I'm trying to reach it using the following configuration:
location /zenith/ {
    root /srv/http/zenith/public;
    try_files $uri $uri/ /index.php?$query_string;

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
But I get a 404 error every time. As I read in the nginx documentation, nginx does not remove the matched prefix from the URI, so even inside the /zenith/ block all URIs still start with /zenith/. This way, example.com/zenith points to /srv/http/zenith/public/zenith when I want /srv/http/zenith/public.
How do I fix this? I expected nginx to remove the unwanted part automatically, but it seems it does not work that way.
You need to understand the difference between root and alias. root maps the URI / to the directory mentioned and expects all URI parts after it to match the on-disk tree. alias maps the location prefix of the block it's part of to the directory mentioned and expects only the URI parts after that prefix to match the on-disk tree. Since root inside a location block still maps the / URI, the part after / needs to exist on disk for things to work. In the common case you will use root for the document root and alias inside location blocks.
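Applied to the question's paths, an alias-based sketch would look like this (assuming the app really lives at /srv/http/zenith, and leaving out the nested PHP location for brevity):

```nginx
location /zenith/ {
    # alias strips the matched /zenith/ prefix, so a request for
    # example.com/zenith/foo maps to /srv/http/zenith/public/foo.
    alias /srv/http/zenith/public/;
    try_files $uri $uri/ /zenith/index.php?$query_string;
}
```

With root instead of alias, the same request would map to /srv/http/zenith/public/zenith/foo, which is precisely the 404 described above.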

How to enable xdebug with nginx?

My situation is the following:
I have a VM (Ubuntu server 13.04) with PHP 5.4.9-4ubuntu2.2, nginx/1.2.6, php5-fpm and Xdebug v2.2.1.
I'm developing an app using PhpStorm 6.0.3 (which I deploy on the VM).
My problem is, whenever I try to start a debugging session, the IDE never gets a connection request from the webserver (and thus the session never starts).
I looked through a lot of recommendations about xdebug configuration and found nothing useful.
What I recently realized is that if I set the XDEBUG_SESSION cookie myself through the browser (Thanks FireCookie) I can debug my app... so my guess is there's something keeping the webserver from sending the cookie back to the client.
The thing is, I'm using the same IDE configuration in a different project, which is deployed onto a different CentOS-based VM (with lighttpd), and it works just fine.
I tried deploying my current project onto such a VM (changing the webserver to nginx) and it worked all right (unfortunately I lost that VM and can't check the config :().
So... here's my nginx config:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;

    location / {
        try_files $uri $uri/ /dispatch.php;
    }

    location ~ \.php$ {
        root /var/www/bresson/web;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        #fastcgi_pass 127.0.0.1:9009;
        fastcgi_index dispatch.php;
        fastcgi_param SCRIPT_FILENAME /var/www/$fastcgi_script_name;
        include fastcgi_params;
    }
}
fpm config (/etc/php5/fpm/pool.d/www.conf):
listen = /var/run/php5-fpm.sock
xdebug.ini:
zend_extension=/usr/lib/php5/20100525/xdebug.so
xdebug.remote_port=9000
xdebug.remote_enable=On
xdebug.remote_connect_back=On
xdebug.remote_log=/var/log/xdebug.log
Any idea will be much appreciated. Thanks!
EDIT:
Another thing I tried was to start a session from PHP, and I saw that the session cookie was created without any problem...
2nd Edit:
I think I found where the problem is: the URI.
I wrote another, much simpler script in order to try configuration parameters and such, and it worked right away!
So eventually I figured out that the query parameters (i.e. XDEBUG_SESSION_START=14845) were not reaching my script.
The problem is my starting URI, which is of the form /images/P/P1/P1010044-242x300.jpg. Through some virtual-host configuration I should be able to route it to something like /dispatch.php/images/P/P1/P1010044-242x300.jpg and use the rest of the URI as parameters. So... I haven't found a solution per se, but now I have a viable workaround (pointing my starting URL to /dispatch.php), which will do for a while. Thanks!
Just in case someone is reading this... I got it!
The problem was nginx's configuration. I had just copied a template from somewhere, but then I read a little more and found out that my particular config was much simpler:
location / {
    root /var/www/bresson/web/;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
In my case, every request has to be forwarded to my front-controller (which then analyzes the URI), so it was really simple.

*Actually* getting nginx to reload config

I'm trying to set up a subdomain with nginx on Ubuntu 13.04. I was actually able to do this before with:
server {
    root /home/ajcrites/projects/site-a/public/;
    server_name sub.localhost;

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Then http://sub.localhost would work perfectly.
Now I've copied the same config and just changed root (to a path that exists) and server_name to some other value, like subber.localhost. No matter what I do, I can't get subber.localhost to work; and no matter what I do, I can't get sub.localhost to stop working while nginx is running. What's weird is that nginx will actually complain about errors in the config files, but even when there are none it doesn't matter: only sub.localhost works.
I've tried doing all of the following, repeatedly and with various combinations:
Create a separate file in sites-available. nginx seems to think that it's there, since it will complain about errors when I try to reload.
service nginx stop && service nginx start
nginx stop; nginx
nginx -s reload
When I stop it, pgrep does not show any nginx process, so it seems to be correctly stopped. This does not appear to be a caching issue, as a fresh browser instance reacts the same way.
I have had similar problems; each time it was a simple issue (that took me a long time to find, though). For instance (sorry if some points are obvious, but sometimes we focus on the difficult while ignoring the obvious; at least that's my case):
ensure the symlink is present in sites-enabled;
the root path has read access (or at least execute) all the way from / for the nginx user;
is subber.localhost well defined in /etc/hosts for local tests (or in local DNS)?
Maybe you could also try to force the IP address that listen binds to. E.g., instead of
listen 80;
you could try
listen 127.0.0.1:80; # or
listen 192.168.0.1:80; # your local address (or a remote one if that's the case)
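Independent of the listen address, it helps to validate the configuration and reload in one step, so a broken config can never be silently ignored (assuming a Debian/Ubuntu-style service setup as in the question):

```shell
sudo nginx -t && sudo service nginx reload
```

nginx -t reports the file and line of any error, and the && ensures the reload only happens when the test passes.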