How to enable Xdebug with nginx?

My situation is the following:
I have a VM (Ubuntu server 13.04) with PHP 5.4.9-4ubuntu2.2, nginx/1.2.6, php5-fpm and Xdebug v2.2.1.
I'm developing an app using PhpStorm 6.0.3 (which I deploy on the VM).
My problem is that whenever I try to start a debugging session, the IDE never gets a connection request from the web server (and thus the session never starts).
I looked through a lot of recommendations about xdebug configuration and found nothing useful.
What I recently realized is that if I set the XDEBUG_SESSION cookie myself through the browser (thanks, FireCookie) I can debug my app... so my guess is there's something keeping the web server from sending the cookie back to the client.
The thing is, I'm using the same IDE configuration in a different project, which is deployed to a different CentOS-based VM (with lighttpd), and it works just fine.
I tried deploying my current project to that VM (changing the web server to nginx) and it worked all right (unfortunately I lost that VM and can't check its config :().
So... here's my nginx config:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name localhost;

    location / {
        try_files $uri $uri/ /dispatch.php;
    }

    location ~ \.php$ {
        root /var/www/bresson/web;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index dispatch.php;
        fastcgi_param SCRIPT_FILENAME /var/www/$fastcgi_script_name;
        include fastcgi_params;
        #fastcgi_pass 127.0.0.1:9009;
    }
}
fpm config (/etc/php5/fpm/pool.d/www.conf):
listen = /var/run/php5-fpm.sock
xdebug.ini:
zend_extension=/usr/lib/php5/20100525/xdebug.so
xdebug.remote_port=9000
xdebug.remote_enable=On
xdebug.remote_connect_back=On
xdebug.remote_log=/var/log/xdebug.log
Any idea will be much appreciated. Thanks!
EDIT:
Another thing I tried was to start a session from PHP, and I saw that the session cookie was created without any problem...
2nd Edit:
I think I found where the problem is: the URI.
I wrote another, much simpler script in order to try out configuration parameters, and it worked right away!
So eventually I figured out that the problem was that the query parameters (e.g. XDEBUG_SESSION_START=14845) were not reaching my script.
The problem is my starting URI, which is of the form /images/P/P1/P1010044-242x300.jpg. Through some virtual host configuration I should be able to route it to something like /dispatch.php/images/P/P1/P1010044-242x300.jpg and use the rest of the URI as parameters. So I haven't found a solution per se, but now I have a viable workaround (pointing my starting URL at /dispatch.php) which will do for a while. Thanks!
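For reference, the sort of routing I have in mind would look roughly like this (untested sketch; it reuses the root and socket from the config above, and the rewrite carries the query string over automatically, so XDEBUG_SESSION_START=... would still reach dispatch.php):

location /images/ {
    # prefix the front controller onto the image URI; the query string is preserved
    rewrite ^(.*)$ /dispatch.php$1 last;
}

# match /dispatch.php even when extra path info follows it
location ~ ^/dispatch\.php {
    root /var/www/bresson/web;
    include fastcgi_params;                 # passes REQUEST_URI etc. through to PHP
    fastcgi_param SCRIPT_FILENAME $document_root/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}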

Just in case there's someone reading this... I got it!
The problem was nginx's configuration. I had just copied a template from somewhere, but now I have read a little more and found out that my particular config could be much simpler:
location / {
    root /var/www/bresson/web/;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
In my case, every request has to be forwarded to my front-controller (which then analyzes the URI), so it was really simple.
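If I later want nginx to serve real static files itself and only fall back to the front controller when nothing matches on disk, a variation along these lines should work (hedged sketch, same paths as above):

location / {
    root /var/www/bresson/web;
    # serve the file directly if it exists, otherwise hand off to dispatch.php
    try_files $uri @dispatch;
}

location @dispatch {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/bresson/web/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}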

Related

rewrite or internal redirection cycle with NGINX

I have a PHP API which lives off the URL path /api and on OSX the following configuration works fine:
location /api {
    try_files $uri /api/index.php$is_args$args;
    fastcgi_pass PHP:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
This same location block on Ubuntu, however, seems to result in:
[error] 9#0: *3 rewrite or internal redirection cycle while internally redirecting to "/api/index.php"
This happens regardless of whether I explicitly call http://localhost/api/index.php, use a directory reference http://localhost/api, or pass some params to the index.php script with a call like http://localhost/api/actions/recent.
Can anyone help me understand why Ubuntu and OSX might be different? What about getting around this rewrite error?
Full details for OSX and Ubuntu can be found here:
https://gist.github.com/ksnyde/80ac9a64a6cb03927838
Well, my initial supposition that OSX and Ubuntu were behaving differently did not prove to be the case.
The turning point for me was understanding a bit more about the genesis of this error. While the text of the error doesn't exactly give it away, it basically indicates that nginx has tried every variation listed on the try_files line. In fact, many people add a =404 to the end of such a line so that it resolves to a more meaningful error if no match is found.
Once I realised what the error was actually saying, it led me to the realisation that there was a mismatch between the path being passed to FPM and the directory structure FPM was expecting. Stupid... but then all hard errors typically have an element of stupidity. :)
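Putting those two observations together, a corrected block might look roughly like this (hedged sketch; /srv/www/app stands in for whatever directory actually contains api/index.php):

root /srv/www/app;                  # assumption: the real docroot for the API

location /api {
    # falls back to the front controller; the redirect cycle appears when
    # /api/index.php cannot be found under $document_root
    try_files $uri /api/index.php$is_args$args;
}

location ~ \.php$ {
    try_files $uri =404;            # fail with a meaningful 404 instead of looping
    fastcgi_pass PHP:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}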

Enable/Disable PHP on Nginx for CDN

I have a server with Nginx installed.
I also have 2 domains pointing to that server. (domain1.com and domain2.com). The first domain (domain1.com) is the front website. The other domain (domain2.com) is the CDN for static content like: JS, CSS, images and font files.
I setup domains config files and everything is running fine. The nginx server has PHP running on it.
My question is: how do I disable PHP on the second domain (domain2.com) unless the request has ?param=something in its query string?
It will be something like:
// PHP is disabled
if ($_GET['param']) {
    // Enable PHP
}
or should I use:
location ~ /something {
    deny all;
}
And keep PHP running?!
Note: I need PHP to process the param I pass in order to output some JS or CSS.
PHP with nginx is very different from PHP with Apache, since there is no mod_php equivalent for nginx (AFAIK).
PHP is handled by a totally separate daemon (php-fpm, or by passing the request to an Apache server, etc.). As a result, you can bypass PHP completely simply by letting nginx handle the request without passing it off to php-fpm or Apache. There is a good chance that your nginx configuration is already set up to hand off only .php files to php-fpm.
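For instance, a static-only server block for domain2.com never has to talk to php-fpm at all (hedged sketch; the root path is an assumption):

server {
    listen 80;
    server_name domain2.com;
    root /var/www/cdn;              # assumption: wherever the static assets live

    location / {
        try_files $uri =404;        # plain file serving, no fastcgi_pass anywhere
        expires 30d;
    }
}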
Now, if you're trying to have requests such as /some-style.css?foo=bar get handled by PHP, then I'd suggest simply segregating static resources from dynamic ones.
You could create a third domain, or simply use two separate directories.
/static/foo.css
vs
/dynamic/bar.css?xyz=pdq
You could then hand off to PHP inside the location blocks.
location ~ /static {
    try_files $uri =404;
}

location ~ /dynamic {
    try_files $uri =404;
    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
With the above configuration, requests starting with /static will bypass PHP regardless of file extension (even .php), and requests starting with /dynamic will be passed on to php-fpm regardless of file extension (even .css).
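If you would rather keep a single CDN host and only hand requests off to PHP when ?param= is present, as originally asked, something along these lines might also work (untested sketch; handler.php is a hypothetical script name, and $arg_param is nginx's built-in variable for the param query argument):

server {
    listen 80;
    server_name domain2.com;
    root /var/www/cdn;                    # assumption

    location / {
        # only requests carrying ?param=... get handed to PHP
        if ($arg_param) {
            rewrite ^ /param-handler last;
        }
        try_files $uri =404;
    }

    location = /param-handler {
        internal;                         # reachable only via the rewrite above
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/handler.php;   # hypothetical script
        fastcgi_pass 127.0.0.1:9000;
    }
}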

nginx mysterious error despite right configuration

I've installed nginx with php5-fpm and MySQL, and here is my configuration:
root /var/www;
index index.html index.htm index.php
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
It started successfully, but it was downloading .php files instead of executing them.
I know it's a problem with the php-fpm engine.
But now the server has stopped and responds with "problem loading page" instead of the "Welcome to nginx" page that came up the first time.
And in the terminal I see:
"user" directive makes sense only if the master process runs with super-user privileges,
nginx: [emerg] "fastcgi_pass" directive is not allowed here in /etc/nginx/sites-enabled/default~:68
So please help me fix this problem.
The editor that you used to edit /etc/nginx/sites-enabled/default left a temporary file default~ (note the ~ suffix) in your /etc/nginx/sites-enabled/ directory. You should delete it.
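For reference, the second error message also points at directive placement: fastcgi_pass is only valid inside a location (or if-in-location) block. A minimal working default site built from the directives in the question might look roughly like this (hedged sketch; the SCRIPT_FILENAME line is an addition that php-fpm generally needs):

server {
    listen 80 default_server;
    root /var/www;
    index index.html index.htm index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}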

*Actually* getting nginx to reload config

I'm trying to set up a subdomain with nginx on Ubuntu 13.04. I was actually able to do this before with:
server {
    root /home/ajcrites/projects/site-a/public/;
    server_name sub.localhost;

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Then http://sub.localhost would work perfectly.
Now, I've copied the same config and just changed root (to a path that exists) and server_name to some other value like subber.localhost. No matter what I do, I can't get subber.localhost to work. Also, no matter what I do, I can't get sub.localhost to stop working while nginx is running. What's weird is that nginx will actually complain about errors in the config files, but even when there are none it doesn't matter: only sub.localhost will work.
I've tried doing all of the following, repeatedly and with various combinations:
Create a separate file in sites-available. nginx seems to think that it's there, since it will complain about errors when I try to reload.
service nginx stop && service nginx start
nginx stop; nginx
nginx -s reload
When I stop it, pgrep does not show any nginx process, so it seems like it is correctly getting stopped. This does not appear to be a cache issue as a fresh browser instance reacts the same way.
I have had similar problems, and each time it turned out to be a simple issue (that nonetheless took me a long time to find). For instance (sorry if some points are obvious, but sometimes we focus on the difficult and ignore the obvious; at least that's my case):
ensure the symlink is present in sites-enabled
the root path has read access (or at least x) for the nginx user all the way from /
is subber.localhost properly defined in /etc/hosts (or local DNS) for local tests?
Maybe you could also try to force the IP address that listen binds to. E.g. instead of
listen 80;
you could try
listen 127.0.0.1:80;   # or
listen 192.168.0.1:80; # your local address (or a remote one, if that's the case)
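Putting the checklist together, a second, self-contained server block for the new name might look like this (hedged sketch; the site-b path and the pinned listen address are assumptions, everything else mirrors the working config above):

# /etc/nginx/sites-available/subber.localhost (symlink it into sites-enabled)
server {
    listen 127.0.0.1:80;                           # assumption: pin the listen address
    server_name subber.localhost;                  # must also resolve, e.g. via /etc/hosts
    root /home/ajcrites/projects/site-b/public/;   # hypothetical path that exists

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}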

Nginx + Zend Framework 2 + Run multiple websites with different domain from the same codebase

I have a modular Zend Framework application which I would like to use as a basis for several websites with different domains, i.e.:
mywebsite.com
mydifferentwebsite.com
mythirdwebsite.com
I want to run all of these websites from the same codebase and the same server. I would like to change different settings such as styles or available pages based on the hostname.
Currently this is how I am getting the hostname:
$requestSiteUrl = $this->serviceLocator->get('URLRequest')->getServer('HTTP_HOST');
However I am not sure how reliable this would be. Is there a better way to find out which website the request is coming from?
Here is an example Nginx config I am using for different websites on the server:
server {
    listen 80;
    server_name mywebsite.com;
    root /var/www/mywebsite/public;
    index index.php;
    server_tokens off;

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param APPLICATION_ENV develop;
        fastcgi_param PHP_VALUE "error_log=/var/log/php/mywebsite_php_errors.log";
        include /etc/nginx/fastcgi_params;
    }
}
Doing:
$requestSiteUrl = $this->serviceLocator->get('URLRequest')->getServer('HTTP_HOST');
is another way of doing:
$_SERVER['HTTP_HOST']
(but it obviously copes better when the variable isn't set). So yes, it's a fine way of finding out which domain nginx thinks was requested. Technically there is a security hole if:
someone just modifies the Host header on their request, or
you have the sites on different IP addresses and someone edits their hosts file to put the 'wrong' IP against a server name, and then makes a request to that 'wrong' IP address.
In either case you could have the wrong site name come up.
If all of your sites are on the same IP, then there's not much you can do about that spoofing. If they're all on separate IP addresses, you set up nginx to use one server block per host, and each one listens only on its allocated IP address, then you could switch to using
$_SERVER['SERVER_NAME']
or the Zend equivalent.
That would then only give you the server name as defined by the server_name entry in your config file, and be unspoofable.
As long as you aren't making any security dependent on having the correct server name, you shouldn't have a problem; i.e. when people log in to one site, they shouldn't also be logged into the other sites, as the session name should be unique per site.
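A variation that avoids trusting the Host header at all is to extend the pattern already used for APPLICATION_ENV in the question's config and pass an explicit per-site value from each server block (hedged sketch; APPLICATION_SITE is a made-up parameter name, readable in PHP as $_SERVER['APPLICATION_SITE']):

server {
    listen 80;
    server_name mywebsite.com;
    root /var/www/mywebsite/public;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param APPLICATION_ENV develop;
        fastcgi_param APPLICATION_SITE mywebsite;   # hypothetical: set a different value in each server block
        include /etc/nginx/fastcgi_params;
    }
}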
