I'm using this configuration on a fresh install of php5-fpm and nginx on Ubuntu 13.04:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.php index.html index.htm;
server_name localhost;
location / {
try_files $uri $uri/ /index.html;
}
location /doc/ {
alias /usr/share/doc/;
autoindex on;
allow 127.0.0.1;
allow ::1;
deny all;
}
error_page 404 /404.html;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
}
But my web browser shows the PHP source as plain text instead of the executed result. Where should I look to troubleshoot?
Your PHP code is being displayed verbatim because it is never handed off to the PHP engine: the request matches a location that serves the file as-is instead of being captured by the PHP location block, so your problem is in that block.
In that block you have two fastcgi_pass directives, one pointing at a port (9000) and the other at a unix socket. You can't have both active at the same time. Since you've tagged your question with fastcgi, I'll assume you are using FastCGI over TCP, so try commenting out this line:
#fastcgi_pass unix:/var/run/php5-fpm.sock;
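Following that advice, a minimal sketch of the PHP block with only one active fastcgi_pass would look like this:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# exactly one fastcgi_pass may be active per location
fastcgi_pass 127.0.0.1:9000;
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
If it turns out php5-fpm is only listening on its unix socket rather than on port 9000, keep the socket line active instead; the point is that the location must contain a single fastcgi_pass.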
It sounds like the wrong Content-Type header is being set. You can check this with various tools. For example, open the Developer Tools "Network" tab in Chrome and then request the page: you'll see the Content-Type in one of the columns, and you can click the request in the left column to see the full response headers. I suspect the header being returned is either "text/plain" or "application/octet-stream" instead of text/html, which is probably what you want.
Nginx usually sets a default Content-Type header based on the file extension. This is done with the types directive, which I don't see mentioned above, so you may wish to check your settings there to confirm that the php extension is mapped to text/html. Explicitly setting a Content-Type header in your application may also help.
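If you prefer the command line, a quick way to inspect the header (using the localhost server name from the config above) is:
curl -I http://localhost/index.php
The Content-Type line in the output should read text/html once PHP is actually being executed.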
I was able to fix this in my nginx vhost by changing
default_type application/octet-stream;
to
default_type text/html;
The environment is as follows:
I have https://website.com and a blog at https://website.com/blog
The root path points to a Passenger-hosted Rails app, and the blog subdirectory points to a WordPress app via php-fpm
Everything works fine with my Nginx config, but when I try to change the permalink structure to anything other than "Plain", I get a 404 page from the Rails app, as if the location blocks aren't being applied. I looked at the error log in debug mode and I can see it attempting the try_files, but it ultimately falls through to the Rails 404 page.
It may be worth noting that the entire site is behind Cloudflare. I'm not sure whether that's a factor, though I doubt it.
Here is the almost-working Nginx config I'm using:
server {
listen 80 default_server;
server_name IP_ADDRESS;
passenger_enabled on;
passenger_app_env production;
passenger_ruby /home/ubuntu/.rbenv/shims/ruby;
root /web/rails/public;
client_max_body_size 20M;
location ^~ /blog {
passenger_enabled off;
alias /web/blog;
index index.php index.htm index.html;
# Tried the commented line below, but then nothing works.
# try_files $uri $uri/ /blog/index.php?$args;
# The line below works, but permalinks don't.
try_files $uri $uri/ /blog/index.php?q=$uri&$args;
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/run/php/php7.3-fpm.sock;
# Tried the commented line below, but then nothing works
# fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# The line below works, but permalinks don't.
fastcgi_param SCRIPT_FILENAME $request_filename;
}
}
}
I wanted to leave this as a short comment, but I don't have enough reputation for that.
I used the following block and it worked for me. I added an add_header directive just to confirm that the request was reaching the correct block.
location ^~ /blog {
try_files $uri $uri/ /index.php?$args;
add_header reached blog;
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass php;
}
}
If your server is behind Cloudflare, you can try adding an /etc/hosts entry on your local machine (if you're using Ubuntu or a Mac), which skips the DNS lookup so the site is accessed directly by the origin IP address.
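For example, a hypothetical entry could look like this (203.0.113.10 is just a placeholder for your server's real origin IP):
203.0.113.10 website.com
With that in place, requests from your machine go straight to the origin server, taking Cloudflare out of the picture while you test.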
Check if any redirects are happening due to any other Nginx configuration.
Also, you mentioned in the question that the site is served over https://, while your server block only has listen 80, i.e. plain HTTP.
Check for the response headers with
curl -XGET -IL site-name.tld
which may give you more information to debug the situation.
See also the difference between the alias and root directives: https://stackoverflow.com/a/10647080/12257950
I'm trying to install WordPress on a subdomain on Ubuntu 18.04. I set up the Nginx config in sites-available, but I got a 502 error in the browser because WordPress uses a .php file for its index, so I added "index.php" to the index list in sites-available. After adding "index.php" to the list, when I try to access the URL the browser downloads a file named after the subdomain instead of showing the site.
Here's my code in sites-available
server {
listen 80;
listen [::]:80;
root /var/www/apt;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html index.php;
server_name apt.forrum.ro;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
}
Please let me know how to fix it.
This is simplified, but basically Nginx uses the try_files directive to serve the matching file from the folder as-is. That's why your PHP file is being sent to the user; it gets downloaded rather than displayed because browsers don't know what to do with raw PHP source.
What you need to do is tell Nginx to execute the file. In the case of PHP you can use FastCGI (PHP-FPM). There are many guides to setting this up on Ubuntu, such as this one.
Once you have it installed, all the FastCGI directives are described in the Nginx documentation here.
Their example is posted here:
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
# Mitigate https://httpoxy.org/ vulnerabilities
fastcgi_param HTTP_PROXY "";
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
# include the fastcgi_param setting
include fastcgi_params;
# SCRIPT_FILENAME parameter is used for PHP FPM determining
# the script name. If it is not set in fastcgi_params file,
# i.e. /etc/nginx/fastcgi_params or in the parent contexts,
# please comment off following line:
# fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
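Applied to the vhost from the question, a minimal sketch could look like the block below. The PHP-FPM socket path is an assumption (Ubuntu 18.04 normally ships PHP 7.2), so check ls /run/php/ and adjust it to your installed version:
server {
listen 80;
listen [::]:80;
root /var/www/apt;
index index.php index.html index.htm index.nginx-debian.html;
server_name apt.forrum.ro;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
# snippets/fastcgi-php.conf ships with the Ubuntu nginx package and
# sets SCRIPT_FILENAME and related parameters for PHP-FPM
include snippets/fastcgi-php.conf;
# assumed socket path for the distro package; verify with: ls /run/php/
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
}
}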
I have a local NginX testing server on my Windows 10 machine. This is just for creating and testing websites, it is not served to the internet.
I've been testing one site successfully at localhost for a while, but now I want to add a second test site. I thought I could achieve this by duplicating the server{} block in the nginx.conf file and changing the server_name and a few other parameters, but it doesn't seem to work. When I try to load my second test site in Chrome, I get this error:
This site can’t be reached
local_test_2’s server DNS address could not be found.
My site at localhost still works, though.
Why is my second test site not working?
Here's my current nginx.conf file:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type text/html;
sendfile on;
keepalive_timeout 65;
server {
#Server basics
server_name localhost;
listen 80;
index index.html index.php;
root c:/nginx/html;
location / {
try_files $uri $uri/ /index.php?_url=$uri&$query_string;
}
location ~ .(php|htm|html)$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name;
include fastcgi_params;
}
}
server {
#Server basics
server_name local_test_2;
listen 80;
index index.html index.php;
root "C:\Users\User Name\Documents\Test\example.com";
location / {
try_files $uri $uri/ /index.php?_url=$uri&$query_string;
}
location ~ .(php|htm|html)$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name;
include fastcgi_params;
}
}
}
Update:
My C:\Windows\System32\drivers\etc\hosts file has the following:
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
The current 'localhost' specification is commented out. Should I change this file?
You need to add local_test_2 to your Windows hosts file at
C:\Windows\System32\drivers\etc\hosts
Add the following line at the end of that file:
127.0.0.1 local_test_2
You can also check this reference on setting up a new host in nginx: Setting up Nginx on local machine
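After saving the hosts file, you can confirm the name now resolves locally before retrying the browser:
ping local_test_2
It should reply from 127.0.0.1; nginx will then receive the request and match it against the local_test_2 server block.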
local_test_2 is a hostname you made up for testing purposes. Since you didn't register it with a registrar, no DNS provider will be able to resolve it to an IP address.
Every operating system has a hosts file (on Linux it is /etc/hosts) which can be used to map hostnames to IP addresses without relying on an online DNS service. So in your case you can append the following line,
127.0.0.1 local_test_2
which routes all requests for local_test_2 to the same machine (127.0.0.1). No other changes are required in the hosts file.
Refer to this link for more details on hosts files and the locations used by different operating systems.
What am I missing?
Let's take this simple conf:
upstream php {
server 111.1111.1111.1111:9000;
}
server {
listen 80 reuseport;
root /var/www/html/public/;
index index.php;
location / {
set $orig_uri $uri;
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
ssi on;
include snippets/fastcgi-php.conf;
fastcgi_param REQUEST_URI $orig_uri$is_args$args;
fastcgi_cache_key "$scheme$request_method$orig_uri$is_args$args";
add_header X-Cache-Key $scheme$request_method$orig_uri$is_args$args;
add_header X-Cache $upstream_cache_status;
# With php5-cgi alone:
#fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_pass php;
}
}
How is nginx going to access "/var/www/html/public/" on a different server? How will it know whether the file exists or not? I tried to play with it and I always get
404 Not Found
So I am missing something, but I can't understand what, given that the PHP servers don't have nginx installed.
How is nginx going to access "/var/www/html/public/" on a different
server? How will it know whether the file exists or not?
nginx cannot determine if the file exists on the upstream PHP service. It will pass a script pathname to the PHP service and if it does not exist, the PHP service will return a 404 response.
The PHP service probably uses the fastcgi_param SCRIPT_FILENAME parameter to find the upstream script file. The parameter should be defined in your snippets/fastcgi-php.conf file.
This is usually set to a value of $document_root$fastcgi_script_name, where $document_root is set by the root directive in your configuration file.
This will only work if the scripts on the upstream server are placed in the exact same directory hierarchy as specified by the nginx server. Otherwise, you may need to handcraft the value of the SCRIPT_FILENAME parameter.
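For example, assuming the scripts on the PHP host live under a hypothetical /srv/app/public, you could handcraft the value yourself instead of relying on the $document_root-based default:
location ~ \.php$ {
include fastcgi_params;
# path as it exists on the remote PHP-FPM host (placeholder value)
fastcgi_param SCRIPT_FILENAME /srv/app/public$fastcgi_script_name;
fastcgi_pass php;
}
Keeping the directory layout identical on both machines avoids this kind of special-casing, which is why it is usually the simpler option.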
I moved from an Apache 2 + Varnish setup to Nginx alone, and I'm stuck on how to set up and use ESI as well as fastcgi_cache in this configuration.
First of all, the idea of ESI was to put a reverse proxy layer in front of the server to cache the cacheable parts of a page, then use ESI to retrieve the dynamic parts. In my previous setup Varnish acted as the reverse proxy and Apache only handled the ESI requests when necessary.
My question is: now that Nginx is acting as the sole server here, how do I make this work? Do I need to set up another Nginx instance running as a reverse proxy, or something like that? I couldn't find any documentation on this.
The second question is regarding fastcgi_cache. I have set it up as described below, but the cache doesn't seem to work for me: no cache files are populated and I always get "MISS". I wonder if it's because I need to set max-age/s-maxage in each controller for it to work?
fastcgi_cache_path /run levels=1:2 keys_zone=www_mysite_com:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
server {
#listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default ipv6only=on; ## listen for ipv6
root /var/www/mysite.com/w/w/w/www/web;
index index.php index.html index.htm;
# Make site accessible from http://www.mysite.com
server_name www.mysite.com;
# Specify a character set
charset utf-8;
# strip app.php/ prefix if it is present
rewrite ^/app\.php/?(.*)$ /$1 permanent;
# h5bp nginx configs
# include conf/h5bp.conf;
location / {
index app.php;
try_files $uri #rewriteapp;
}
location #rewriteapp {
rewrite ^(.*)$ /app.php/$1 last;
}
# Deny access to .htaccess
location ~ /\.ht {
deny all;
}
# Don't log robots.txt or favicon.ico files
location = /favicon.ico { log_not_found off; access_log off; }
location = /robots.txt { access_log off; log_not_found off; }
# 404 errors handled by our application, for instance Symfony
error_page 404 /app.php;
# pass the PHP scripts to FastCGI server from upstream phpfcgi
location ~ ^/(app|app_dev|backend/app|backend/app_dev|config)\.php(/|$) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME web/$fastcgi_script_name;
fastcgi_param HTTPS off;
fastcgi_cache www_mysite_com;
fastcgi_cache_valid 200 60m;
}
# Only for nginx-naxsi : process denied requests
#location /RequestDenied {
# For example, return an error code
#return 418;
#}
# redirect server error pages to the static page /50x.html
#
#error_page 500 502 503 504 /50x.html;
}
By default, responses from the Symfony 2 application have a cache control header that disables caching:
Cache-Control: no-cache
If you would like nginx to cache pages you will have to change those headers.
You can find general information about caching in the documentation
The simplest solution is to use the SensioFrameworkExtraBundle (you already have it if you use the Symfony 2 Standard Edition) and use annotations on your controllers and/or actions to specify the cache headers. You can find more info about this approach in the docs for the @Cache annotation.
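Once the headers have been changed, you can verify what the application actually sends with a quick header check against any page of the site:
curl -I http://www.mysite.com/
Cache-Control should change from no-cache to something cacheable (for example public with an s-maxage value); only then will fastcgi_cache start producing HITs, since nginx honours the upstream's cache headers by default.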