I have around 1300 vhosts in one nginx conf file, all with the following layout (they are listed one after another in the vhosts file).
Now my problem is that sometimes my browser redirects site2 to site1, even though the domain names don't even match.
It looks like nginx is always redirecting to the first site in the vhosts file.
Does anybody know what could be causing this?
server {
    listen 80;
    server_name site1.com;
    rewrite ^(.*) http://www.site1.com$1 permanent;
}

server {
    listen 80;
    root /srv/www/site/public_html/src/public/;
    error_log /srv/www/site/logs/error.log;
    index index.php;
    server_name www.site1.com;

    location / {
        if (!-e $request_filename) {
            rewrite ^.*$ /index.php last;
        }
    }

    location ~ .(php|phtml)$ {
        try_files $uri $uri/ /index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/site/public_html/src/public$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

server {
    listen 80;
    server_name site2.com;
    rewrite ^(.*) http://www.site2.com$1 permanent;
}

server {
    listen 80;
    root /srv/www/site/public_html/src/public/;
    error_log /srv/www/site/logs/error.log;
    index index.php;
    server_name www.site2.com;

    location / {
        if (!-e $request_filename) {
            rewrite ^.*$ /index.php last;
        }
    }

    location ~ .(php|phtml)$ {
        try_files $uri $uri/ /index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/site/public_html/src/public$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
EDIT
Maybe another thing worth mentioning: I reload all of these vhosts every 2 minutes with nginx -s reload.
From the first tests it looks like the redirection only happens while reloading... I'm going to run some more tests, but this could be helpful.
Reference (how nginx handles a request): http://nginx.org/en/docs/http/request_processing.html
In this configuration nginx tests only the request’s header field
“Host” to determine which server the request should be routed to. If
its value does not match any server name, or the request does not
contain this header field at all, then nginx will route the request to
the default server for this port.
the default server is the first one — which is nginx’s standard
default behaviour
Could you check the Host header of those bad requests?
Also, you can create an explicit default server to catch all of these bad requests and log the request info (e.g. $http_host) into a separate error log file for investigation.
server {
    listen 80 default_server;
    server_name _;
    error_log /path/to/the/default_server_error.log;
    return 444;
}
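One way to capture $http_host explicitly is via the access log rather than the error log, using a custom log format (a minimal sketch; the log path and format name are placeholders, and the log_format directive must live in the http context):

# in the http { } context
log_format bad_host '$remote_addr [$time_local] host="$http_host" "$request"';

server {
    listen 80 default_server;
    server_name _;
    access_log /path/to/default_server_access.log bad_host;
    return 444;
}

Requests closed with 444 should still show up in this log, so you can see exactly which Host values are landing on the default server.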
[UPDATE] Since you are doing nginx -s reload and you have so many domains in that nginx conf file, the following is possible.
A reload works like this:
starting new worker processes with a new configuration, graceful shutdown of old worker processes
So old workers and new workers can co-exist for a while. For example, when you add a new server block (with a new domain name) to your config file, the new workers will know about the new domain during the reload window, but the old workers will not. If a request happens to be handled by an old worker process, it will be treated as an unknown host and served by the default server.
You said the reload is done every 2 minutes. Could you run
ps aux | grep nginx
and check how long each worker has been running? If it's much longer than 2 minutes, the reload may not be working as you expect.
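For example, you can list the nginx processes together with how long each has been running (a quick sketch; ps options can vary slightly between platforms):

# show PID, elapsed time, and command line for all nginx processes
ps -o pid,etime,cmd -C nginx

Old workers that are still draining connections after a reload are typically shown as "nginx: worker process is shutting down"; if workers keep running far longer than the 2-minute reload interval, some requests are still being served with an older configuration.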
Related
The environment is as follows:
I have https://website.com and a blog at https://website.com/blog
The root path points to a Passenger-hosted Rails app, and the blog subdirectory points to a WordPress app via php-fpm.
Everything works fine with my Nginx config, but when I try to change the permalink structure to anything other than "Plain", I get a 404 page from the Rails app, as if the location blocks aren't being used. I looked at the error log in debug mode, and I can see it attempting the try_files, but it ultimately falls through to the Rails 404 page.
It may be worth noting that the entire site is behind Cloudflare. Not sure if it could be something with that, though I kind of doubt it.
Here is the almost-working Nginx config I'm using:
server {
    listen 80 default_server;
    server_name IP_ADDRESS;

    passenger_enabled on;
    passenger_app_env production;
    passenger_ruby /home/ubuntu/.rbenv/shims/ruby;
    root /web/rails/public;
    client_max_body_size 20M;

    location ^~ /blog {
        passenger_enabled off;
        alias /web/blog;
        index index.php index.htm index.html;

        # Tried the commented line below, but then nothing works.
        # try_files $uri $uri/ /blog/index.php?$args;
        # The line below works, but permalinks don't.
        try_files $uri $uri/ /blog/index.php?q=$uri&$args;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php7.3-fpm.sock;
            # Tried the commented line below, but then nothing works.
            # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # The line below works, but permalinks don't.
            fastcgi_param SCRIPT_FILENAME $request_filename;
        }
    }
}
I wanted to leave this as a comment, but I don't have enough reputation for that.
I used the following block and it worked for me. I added an add_header directive just to verify that my request is reaching the correct block.
location ^~ /blog {
    try_files $uri $uri/ /index.php?$args;
    add_header reached blog;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass php;
    }
}
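To confirm the debug header actually comes back on a blog request (a quick check, assuming the URL from the question), you can inspect the response headers:

curl -sI https://website.com/blog/ | grep -i reached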
If your server is behind Cloudflare, you can try an /etc/hosts entry on your local machine if you're using Ubuntu/Mac. That skips the DNS lookup so the site is accessed directly via the server's IP address.
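For example, an entry pointing the domain straight at the origin server might look like this (the IP below is just a placeholder; use your server's real address):

203.0.113.10    website.com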
Check if any redirects are happening due to any other Nginx configuration.
Also, you mention in the question that the site is https:// while your server block only has listen 80, i.e. non-HTTPS.
Check the response headers with
curl -XGET -IL site-name.tld
which may help you debug the situation further.
Difference between alias and root directives https://stackoverflow.com/a/10647080/12257950
I am developing a website, and I just installed SSL on the production website (I have never done this before). When I load the development website, the page redirects to https and breaks because https isn't set up on the development site.
Development url: http://local.ezel.io
Production url: https://ezel.io
The Nginx config (production):
server {
    listen 80;
    server_name ezel.io;
    root /var/www/ezel.io/public;

    location ~ /.well-known {
        allow all;
    }

    rewrite ^ https://$server_name$request_uri? permanent;
}
The Nginx config (development):
server {
    listen 80;
    server_name local.ezel.io;
    root /home/ryan/Documents/www/ezel.io/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
On my development machine, I also have the following in my hosts file:
127.0.0.1 local.ezel.io
What would be causing me to go from http://local.ezel.io to https://local.ezel.io?
I think the problem is that you enabled HSTS (https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) at some point, and now your browser insists on using HTTPS.
Try this: http://classically.me/blogs/how-clear-hsts-settings-major-browsers
Also, try pinging local.ezel.io to ensure it's really your localhost and not actually ezel.io.
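To check whether production is actually sending an HSTS header (a quick check; the exact value depends on how you configured SSL), you could run:

curl -sI https://ezel.io | grep -i strict-transport-security

If the header is present and includes includeSubDomains, that would also explain the behaviour, since local.ezel.io is a subdomain of ezel.io.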
I'm migrating from Apache to nginx and need to convert a massive .htaccess file to nginx format.
I found two ways that work; which one should I use?
location = /test.html { rewrite ^(.*)$ /index.php?action=temp&name=test; }
or just
rewrite ^/test.html$ /index.php?action=temp&name=test;
I'm putting all of this in a file (ez_rewrite_list.conf) and then including it in virtual.conf. Where should I put that include, location-wise? Does it matter? Am I doing this right? Any tips?
server {
    listen 80;
    server_name test.com;

    location / {
        root /var/www/com/mysite;
        index index.php index.html index.htm;
    }

    include /etc/nginx/ez_conf/ez_rewrite_list.conf;

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root /var/www/com/mysite;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
You forgot to escape the dot (.) in the rewrite, and the two are not exactly the same.
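If you keep the standalone rewrite, the escaped form would presumably look like this:

rewrite ^/test\.html$ /index.php?action=temp&name=test;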
Technically, an exact-match location should be a little faster than checking a regexp for every request. Also, you don't need to capture anything in the rewrite inside the location, so I would use:
location = /test.html {
    rewrite ^ /index.php?action=temp&name=test;
}
But in practice, you'll never notice any difference.
I moved from a setup of Apache 2 + Varnish to Nginx alone, and I'm somewhat stuck on how I should set up and use ESI as well as fastcgi_cache in this new setup.
First of all, the idea of ESI was to set up a reverse proxy layer in front of the server to cache the cacheable parts of a page, then use ESI to retrieve the dynamic parts. In my previous setup, Varnish acted as the reverse proxy and Apache only handled the ESI requests when necessary.
My question is: now that Nginx is acting as the sole server here, how do I make this work? Do I need to set up another Nginx instance running as a reverse proxy, or something like that? I couldn't find any documentation on this.
The second question concerns fastcgi_cache. I have set it up as shown below, but the cache doesn't seem to work for me: no cache files are populated and I always get a "MISS". I wonder if it's because I need to set max-age/shared-max-age in each controller for it to work?
fastcgi_cache_path /run levels=1:2 keys_zone=www_mysite_com:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;

server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/mysite.com/w/w/w/www/web;
    index index.php index.html index.htm;

    # Make site accessible from http://www.mysite.com
    server_name www.mysite.com;

    # Specify a character set
    charset utf-8;

    # strip app.php/ prefix if it is present
    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    # h5bp nginx configs
    # include conf/h5bp.conf;

    location / {
        index app.php;
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    # Deny access to .htaccess
    location ~ /\.ht {
        deny all;
    }

    # Don't log robots.txt or favicon.ico files
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { access_log off; log_not_found off; }

    # 404 errors handled by our application, for instance Symfony
    error_page 404 /app.php;

    # pass the PHP scripts to FastCGI server from upstream phpfcgi
    location ~ ^/(app|app_dev|backend/app|backend/app_dev|config)\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME web/$fastcgi_script_name;
        fastcgi_param HTTPS off;
        fastcgi_cache www_mysite_com;
        fastcgi_cache_valid 200 60m;
    }

    # Only for nginx-naxsi : process denied requests
    #location /RequestDenied {
    #    For example, return an error code
    #    return 418;
    #}

    # redirect server error pages to the static page /50x.html
    #error_page 500 502 503 504 /50x.html;
}
By default, responses from the Symfony 2 application have a Cache-Control header that disables caching:
Cache-Control: no-cache
If you would like nginx to cache pages you will have to change those headers.
You can find general information about caching in the documentation
The simplest solution is to use the SensioFrameworkExtraBundle (you already have it if you use the SF2 Standard Edition) and use annotations on your controllers and/or actions to specify the cache headers. You can find more info about this approach in the docs for the @Cache annotation.
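Alternatively, if you want nginx to cache the pages regardless of what the application sends, you can tell nginx to ignore those headers inside the PHP location block (a sketch only; this changes caching semantics for every response, so use it with care, and the X-Cache-Status header name is just a debugging convention, not part of your config):

# make nginx cache even when the app sends Cache-Control: no-cache
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
# expose whether the response came from the cache
add_header X-Cache-Status $upstream_cache_status;

With the debug header in place you can watch responses flip from MISS to HIT once caching actually kicks in.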
I am replacing lighttpd with nginx on my development server. I got it working with PHP and SSL, but I'm stumped by what should be a simple rewrite. I need to rewrite URLs from
http[s]://dev.foo.com/signup/123456
to
http[s]://dev.foo.com/signup/index.php?attcode=123456
The rule I am using is:
rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;
I have tried numerous variations on this, moved it around, and put it inside a location block. What happens is that the URL gets rewritten to:
http://dev.foo.com/dev.foo.com/signup/123456
The hostname is inserted, and it always seems to lose https and fall back to http.
My nginx.conf server section is below. I have read and re-read the nginx docs (such as they are) and searched the nginx mailing list, but nothing I've tried has solved this problem.
Ubuntu 8.04 LTS, in case that matters.
Thanks.
server {
    listen 80;
    listen 443 default ssl;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    index index.php index.html;

    # ssl cert stuff omitted

    charset utf-8;
    access_log /var/log/www/dev.access.log main;

    location ~ /\. {
        deny all;
    }

    location ~* ^.+\.(inc|tpl|sql|ini|bak|sh|cgi)$ {
        deny all;
    }

    location ~* ^/(scripts|tmp|sql)/ {
        deny all;
    }

    rewrite ^/robots.txt$ /robots_nocrawl.txt break;
    rewrite ^/signup/([0-9]+)$ /signup/index.php?attycode=$1 last;

    location / {
        try_files $uri $uri/ /error_404.php;
    }

    location ~ \.php$ {
        fastcgi_pass localhost:51115;
        fastcgi_index index.php;
        fastcgi_intercept_errors on;
        include fastcgi_params;
        fastcgi_param SERVER_NAME $http_host;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    error_page 404 /error_404.php;
}
Don't put HTTP and HTTPS in the same server block. Separate them into two almost-identical server blocks, one for HTTP and one for HTTPS. Otherwise you will confuse all kinds of Nginx internals.
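A minimal sketch of that split, reusing the names from the question (the certificate paths are placeholders, and the locations, rewrites and PHP handling from the original block would be duplicated or included in both):

server {
    listen 80;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    # ... rewrites, locations and PHP handling as in the original ...
}

server {
    listen 443 ssl;
    server_name dev.foo.com dev.bar.com localhost;
    root /var/www/foo;
    ssl_certificate /path/to/cert.pem;        # placeholder path
    ssl_certificate_key /path/to/key.pem;     # placeholder path
    # ... same rewrites, locations and PHP handling ...
}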