nginx: mysterious error despite correct configuration

I've installed nginx with php5-fpm and MySQL, and here is my configuration:
root /var/www;
index index.html index.htm index.php;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
It started successfully, but nginx was serving .php files as downloads instead of executing them.
I know it's a problem with the php-fpm setup.
But now the server has stopped and responds with "Problem loading page" instead of the "Welcome to nginx" page that appeared the first time.
And in the terminal I see:
"user" directive makes sense only if the master process runs with super-user privileges,
nginx: [emerg] "fastcgi_pass" directive is not allowed here in /etc/nginx/sites-enabled/default~:68
So please help me fix this problem.

The editor you used to edit /etc/nginx/sites-enabled/default left a temporary file, default~ (note the ~ suffix), in your /etc/nginx/sites-enabled/ directory. nginx loads every file in that directory, so it tried to parse the stale copy as well. You should delete it.
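For reference, the cleanup can be sketched like this; a temporary directory stands in for /etc/nginx/sites-enabled so the commands are safe to run anywhere:

```shell
# Stand-in for /etc/nginx/sites-enabled (assumption: Debian-style layout
# where nginx includes every file in this directory).
dir=$(mktemp -d)
touch "$dir/default" "$dir/default~"

# Editor backups end in ~; nginx tries to parse them like any other file,
# which is how the stale default~ broke the config. Find and delete them.
find "$dir" -maxdepth 1 -name '*~' -print -delete

ls "$dir"   # only the real config remains
```

On a real system, running the same find against /etc/nginx/sites-enabled, then nginx -t and a reload, should clear the [emerg] error.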


How to debug and find the default document root for subdomains in nginx?

I am unable to work out which default document root is being used for one of the subdomains configured in nginx; I have tried all the valid solutions described here.
Web server running on nginx version: nginx/1.10.3
The problem is that I need to figure out where the root directory for the URL http://w.knexusgroup.com/ is; requesting http://w.knexusgroup.com/index.php prints 1.
In /usr/share/nginx/html/index.php I have written 2, so that is not the file being served.
Below is a snippet of the nginx conf:
include /etc/nginx/static/*.conf;
include /etc/nginx/saas_clients/*.conf;
index index.php index.html index.htm;
server {
    listen 80;
    server_name localhost;
    root /mnt/codebase/httpdocs/saas;
    location / {
        root /usr/share/nginx/html;
        index index.php index.html index.htm;
    }
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        root html;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        include fastcgi_params;
    }
...
nginx -V
nginx version: nginx/1.10.3
built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC)
built with OpenSSL 1.0.1k-fips 8 Jan 2015
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx
It seems you have overridden root.
Steps to debug this kind of problem:
1. Try to create one new virtual host file with w.knexus.
2. Restart nginx. This time it will report a duplicate entry.
3. Pay attention to which config is not the newly created one.
4. Once you have identified that config, you can remove the newly created one.
5. Look at the root in that conf:
a. There might be multiple root directives depending on location, so make sure you are looking at the right root path.
b. You might have proxies, so it might need further debugging inside the proxied app as well.
Cheers
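The "look at the root" step can be sketched with grep; the config tree below is fabricated for illustration, standing in for /etc/nginx and its included directories:

```shell
# Build a throwaway config tree (names and paths are made up).
confdir=$(mktemp -d)
mkdir -p "$confdir/saas_clients"
cat > "$confdir/saas_clients/w.conf" <<'EOF'
server {
    server_name w.knexusgroup.com;
    root /mnt/codebase/httpdocs/saas;
}
EOF

# Print every server_name and root directive with file and line number,
# so you can see which file (and which location) owns the host.
grep -RHn -e 'server_name' -e 'root' "$confdir"
```

On the real box, nginx -T (capital T) dumps the fully merged configuration, which is often quicker than chasing include directives by hand.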

nginx configuration: alias and 404 error

My actual problem was that I wanted to make "site.com/blog/index.php" map to "/srv/www/blog/caller/index.php". Although it would be very straightforward to map it to "/srv/www/blog/index.php" using "root /srv/www/", that's not what I wanted. I discovered "alias", and it seems to do what I want.
1) First try:
server {
    listen 80;
    server_name _;
    root /srv/www/blog/pages;
    index index.php;
    location /blog {
        alias /srv/www/blog/caller;
    }
}
Here, requesting site.com/blog gets me a 404 Not Found, and nothing shows up in /var/log/nginx/error.log.
2) Second try, to see what happens:
If I change "alias /srv/www/blog/caller;" to a bad path, say "alias /srvX/www/blog/caller;", I actually get the same behaviour in my browser, but I can see in /var/log/nginx/error.log:
[error] 7229#0: *1 "/srvX/www/blog/caller/index.php" is not found (2: No such file or directory), client: 192.168.1.200, server: 192.168.1.221, request: "GET /blog/ HTTP/1.1", host: "192.168.1.221"
Conclusion: I don't know what's happening there. It seems clear that nginx finds the file in my first try, yet it sends a 404 to the browser for no reason I can think of, while when I specify a wrong path, it tells me right away. :/
Edit:
Well, I found the solution. Basically nginx itself works fine; the problem comes from php-fpm, which gets confused when alias is used in nginx. What you need to do is add a nested location inside the aliased location:
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $request_filename;
}
Now it works.
The reason nginx was returning a 404 without writing anything to its logs is that php-fpm was the one failing to serve the file.
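Putting the pieces together, the whole server block would look roughly like this (a sketch assembled from the snippets above, not a tested config):

```nginx
server {
    listen 80;
    server_name _;
    root /srv/www/blog/pages;
    index index.php;

    location /blog {
        alias /srv/www/blog/caller;

        # Nested inside the aliased location: $request_filename resolves
        # against the alias path, which is what php-fpm needs here.
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
        }
    }
}
```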
The problem is that your configuration contains no instructions for handling PHP scripts. To solve this issue, do the following:
Add the following code to your nginx.conf file within the server block, or if you have created a config in your conf.d folder, add it to that file.
location / {
    try_files $uri $uri/ /index.php?q=$request_uri;
}
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
That will solve the problem, but also, in the file /etc/php-fpm.d/www.conf:
Ensure that listen.owner is set to listen.owner = nginx
Ensure that listen.group is set to listen.group = nginx
Restart both services and it should work.
If not, ensure your document root and all files within that directory are owned by the user nginx and the group nginx.
You can do this with:
chown -R nginx:nginx documentroot
And keep doing that, appending /* each time, until you reach an error.
Hope everything works out for you!
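The ownership check can also be done as a walk from the document root up to /; the directory here is a made-up stand-in so the sketch is safe to run:

```shell
# Create a stand-in document root several levels deep.
docroot=$(mktemp -d)/site/public
mkdir -p "$docroot"

# Print owner, group and permissions at every level; nginx needs
# execute (x) permission on each directory along the path, so any
# level it cannot traverse will break serving below it.
d=$docroot
while [ "$d" != "/" ]; do
    ls -ld "$d"
    d=$(dirname "$d")
done
```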

How to enable xdebug with nginx?

My situation is the following:
I have a VM (Ubuntu server 13.04) with PHP 5.4.9-4ubuntu2.2, nginx/1.2.6, php5-fpm and Xdebug v2.2.1.
I'm developing an app using PhpStorm 6.0.3 (which I deploy on the VM).
My problem is that whenever I try to start a debugging session, the IDE never gets a connection request from the web server (and thus the session never starts).
I looked through a lot of recommendations about Xdebug configuration and found nothing useful.
What I recently realized is that if I set the XDEBUG_SESSION cookie myself through the browser (thanks, FireCookie), I can debug my app... so my guess is there's something keeping the web server from sending the cookie back to the client.
The thing is, I'm using the same IDE configuration in a different project, which is deployed to a different, CentOS-based VM (with lighttpd), and it works just fine.
I tried deploying my current project to such a VM (changing the web server to nginx) and it worked all right (unfortunately I lost that VM and can't check the config :().
So... here's my NginX config:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    location / {
        try_files $uri $uri/ /dispatch.php;
    }
    location ~ \.php$ {
        root /var/www/bresson/web;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index dispatch.php;
        fastcgi_param SCRIPT_FILENAME /var/www/$fastcgi_script_name;
        include fastcgi_params;
        #fastcgi_pass 127.0.0.1:9009;
    }
}
fpm config (/etc/php5/fpm/pool.d/www.conf):
listen = /var/run/php5-fpm.sock
xdebug.ini:
zend_extension=/usr/lib/php5/20100525/xdebug.so
xdebug.remote_port=9000
xdebug.remote_enable=On
xdebug.remote_connect_back=On
xdebug.remote_log=/var/log/xdebug.log
Any idea will be much appreciated. Thanks!
EDIT:
Another thing I tried was starting a session from PHP, and I saw that the session cookie was created without any problem...
2nd Edit:
I think I found where the problem is: the URI.
I wrote another, much simpler script in order to try configuration parameters and stuff, and it worked right away!
So eventually I figured out that the problem was that the query parameters (i.e. XDEBUG_SESSION_START=14845) were not reaching my script.
The problem is my starting URI, which is of the form /images/P/P1/P1010044-242x300.jpg. Through some virtual-host configuration I should be able to route it to something like /dispatch.php/images/P/P1/P1010044-242x300.jpg and use the rest of the URI as parameters. So... I haven't found a solution per se, but now I have a viable workaround (pointing my starting URL at /dispatch.php) which will do for a while. Thanks
Just in case there's someone reading this... I got it!
The problem was nginx's configuration. I had just copied a template from somewhere, but after reading a little more I found that my particular config could be much simpler:
location / {
    root /var/www/bresson/web/;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/dispatch.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
In my case, every request has to be forwarded to my front-controller (which then analyzes the URI), so it was really simple.
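As an aside, if relying on the XDEBUG_SESSION cookie or the XDEBUG_SESSION_START query parameter keeps being fragile, Xdebug 2 can be told to start a session on every request. A sketch based on the xdebug.ini above; xdebug.remote_autostart is the added setting:

```ini
zend_extension=/usr/lib/php5/20100525/xdebug.so
xdebug.remote_enable=On
xdebug.remote_connect_back=On
xdebug.remote_port=9000
xdebug.remote_log=/var/log/xdebug.log
; connect back to the client on every request, no cookie or
; query parameter required
xdebug.remote_autostart=On
```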

*Actually* getting nginx to reload config

I'm trying to set up a subdomain with nginx on Ubuntu 13.04. I was actually able to do this before with:
server {
    root /home/ajcrites/projects/site-a/public/;
    server_name sub.localhost;
    location / {
        try_files $uri /index.php;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Then http://sub.localhost would work perfectly.
Now, I've copied the same config and just changed root (to a path that exists) and server_name to some other value like subber.localhost. No matter what I do, I can't get subber.localhost to work. Also, no matter what I do, I can't get sub.localhost to stop working while nginx is running. What's weird is that nginx will actually complain about errors in the config files, but even if there are none it doesn't matter: only sub.localhost works.
I've tried doing all of the following, repeatedly and with various combinations:
Create a separate file in sites-available. nginx seems to think it's there, since it will complain about errors in it when I try to reload.
service nginx stop && service nginx start
nginx stop; nginx
nginx -s reload
When I stop it, pgrep does not show any nginx process, so it seems like it is correctly getting stopped. This does not appear to be a cache issue as a fresh browser instance reacts the same way.
I happen to have had similar problems; each time it was a simple issue (that took me a long time to find, though). For instance (sorry if some points are obvious, but sometimes we focus on the difficult and ignore the obvious; at least that's my case):
ensure the symlink is present in sites-enabled
the root path has read access all the way from / (or at least execute permission on each directory) for the nginx user
is subber.localhost properly defined in /etc/hosts for local tests (or in local DNS)?
Maybe you could also try to force the IP on which listen is listening. E.g. instead of
listen 80;
you could try
listen 127.0.0.1:80; # OR
listen 192.168.0.1:80; # your local address (or remote if that's the case)
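The first check in the list (the sites-enabled symlink) can be sketched like this, with a temporary directory standing in for /etc/nginx:

```shell
# Stand-in for the Debian-style nginx layout.
etc=$(mktemp -d)
mkdir -p "$etc/sites-available" "$etc/sites-enabled"
printf 'server { listen 80; server_name subber.localhost; }\n' \
    > "$etc/sites-available/subber"
ln -s "$etc/sites-available/subber" "$etc/sites-enabled/subber"

# -L: the symlink exists; -e: its target actually resolves.
# A missing or dangling link here is the usual reason a vhost never loads.
[ -L "$etc/sites-enabled/subber" ] && [ -e "$etc/sites-enabled/subber" ] \
    && echo "vhost enabled"
```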

Nginx odd caching or something after enabling php?

I'm not really sure what to file this under, or what the cause is, so I'm sorry if the title is misleading.
I just installed nginx for the first time and, out of curiosity, tried to see if I could get some popular forum software to work properly. I first tried installing vBulletin 4, as this is what one community I host uses. PHP is being handled by php-fpm. I could get any custom page to display a simple php echo I had just written, with any filename or directory: http://example.com/test/test.php or http://example.com/test.php, for instance.
However, when I went to install vBulletin through their install script, located at http://example.com/install/install.php, the file would just download. I double- and triple-checked the nginx config for this domain, and everything seemed like it should work.
After downloading install.php a few times, I decided to try visiting the page in a Chrome Incognito window. Lo and behold, install.php no longer downloaded and the installer prompted me for my customer ID # as it should have. Then I went back to my main (non-incognito) Chrome window and tried to visit the install page again, and install.php got downloaded again!
Here's the config I was using at the time:
server {
    listen ip:80;
    server_name my.domain.com;
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.php;
    }
    location ~ \.php$ {
        root /usr/share/nginx/html;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
    }
}
Any insight on the cause of this issue? I can't imagine why it would serve a download of the php file for one session and then actually serve the dynamic content for another. I don't want any files accidentally getting downloaded by some random user.
Your fastcgi_param values look a little off. You have set PATH_INFO to the script name.
Try:
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
(The fastcgi_split_path_info line is needed so that $fastcgi_path_info gets populated; without it, PATH_INFO would be empty.)
