routing subdomain to subdirectory on nginx - nginx

Please help as explicitly as possible. I have set up a domain on a home server running nginx on Ubuntu 15, with DNS pointed at it. I can use the domain to access the site, and if I append /subdirectory to it, I can load the pages inside that subdirectory. What I am trying to do is get each subdomain to go directly to the correct root, i.e. mysite.com serves /index.htm, and subdomain.mysite.com serves ./subdirectory where the files are located.
I have tried every suggestion, both the popular ones and the criticized ones, and I either get an error restarting nginx or a "server not found" error. I've tried setting up CNAME aliases with my DNS server, and that doesn't work either.
The working config file is below:
##
server {
    server_name "~^(?<sub>.+)\.domain\.tld$";
    index index.php index.html index.htm;
    root //media/user/ednet/$sub;
    ssl_certificate REMOVED FOR SECURITY
    ssl_certificate_key REMOVED FOR SECURITY
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!Anull:!md5;
    listen 80 default_server;
    listen [::]:80 default_server;
    # SSL configuration
    #
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    # root //media/user/ednet;
    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm;
    #=========================Locations=============================
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        # root /usr/share/nginx/html;
    }
    #=========================PHP=============================
    location ~ \.php$ {
        try_files $uri =404;
        # fastcgi_split_path_info ^(.+\.php) (/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
====================
I've tried proxies, redirects, and everything else I can think of, including reading the instructions in the nginx manual and the wiki.
Thanks

A quick "server not found" from the browser suggests that DNS is not set up. Your subdomains need to resolve to your server's public IP address using one of:
- a specific CNAME record pointing to the main domain
- a specific A record pointing to the server's IP address
- a wildcard (catch-all) A record pointing to the server's IP address
Changes to DNS can take many hours to propagate.
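For illustration, the three options would look something like this in BIND zone-file notation (the names and the IP address are placeholders, not the asker's real values):

```zone
; option 1: CNAME for a specific subdomain
subdomain.mysite.com.  IN  CNAME  mysite.com.
; option 2: A record for a specific subdomain
subdomain.mysite.com.  IN  A      203.0.113.10
; option 3: wildcard A record covering all subdomains
*.mysite.com.          IN  A      203.0.113.10
```

The wildcard option is the usual choice here, since the regex-based server_name is meant to catch arbitrary subdomains.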
An error restarting nginx suggests a syntax error in the configuration file.
Your configuration file contains no obvious syntax errors, but there are a few strange elements:
root directive appears twice
index directive appears twice
root directive has two leading // where usually one will do
You have IPv4 and IPv6 configured which is fine if you have an IPv6 connection.
You have this server marked as default_server, which means it will be used even if the server_name regex does not match. So if you visit the server with a subdomain and get files from the main root, that implies the regex failed to match and the server block was used by default. In that case, check the regex.
In summary, the first priority is to fix the DNS. Nothing will work unless the subdomains resolve to the IP address of the server.
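Putting those points together, a cleaned-up version of the server block might look like the following. This is a sketch, not a drop-in config: it keeps the asker's paths and regex, drops the duplicate root and index directives and the extra leading slash, and removes default_server so a regex miss fails visibly instead of silently serving the wrong root.

```nginx
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    # capture the subdomain; without default_server, a non-matching
    # name will no longer be silently served from the main root
    server_name ~^(?<sub>.+)\.domain\.tld$;

    # single root and index; one leading slash is enough
    root /media/user/ednet/$sub;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
```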

Related

Nginx SSL get blank page

I have been trying for hours to get SSL working on nginx without success. I have already set everything up correctly, but I get a blank page when I try to access the site over https; the website works fine over http.
I got the certificate and key from Cloudflare (I already set this up with it some time ago and I remember this part).
I have placed the certificate and key in /etc/nginx/ssl, and my server configuration looks like:
server {
    listen 443 default default_server;
    listen [::]:443 ssl default_server;
    server_name myhost.com;
    ssl_certificate /etc/nginx/ssl/certificate.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    root /var/www;
    index index.php index.html index.htm index.nginx-debian.html;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }
}
I get a blank page when accessing over https, and I don't get any errors in nginx's error or access logs.
Could someone help me?
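One thing stands out in the config above: the first listen directive for the IPv4 socket never enables SSL on port 443 (and uses the legacy default keyword alongside default_server), so nginx may be serving plain HTTP on that port. If that is the cause, the corrected directives would be:

```nginx
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
```

This is a guess from the posted config alone; verifying with nginx -t and a curl -vk https://myhost.com would confirm it.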

Nginx serving wrong certificate and site only when access from particular IP

I have a WordPress site set up on nginx with a valid certificate from certbot, and I can see it from every IP except my home IP. When I use proxies or a different internet connection, the WordPress site is displayed correctly. I can also access the site over plain HTTP from every IP except my home IP. I bypassed WordPress and made a simple echo PHP file to make sure that no plugin was causing the problem; the echo still shows up everywhere except from my home IP address. My conclusion is that nginx is somehow restricting my IP, but I have no idea where to start looking. I do not have fail2ban installed, nor do I have any known firewall rules that would cause this. Any ideas as to what is going on?
In the nginx log, the error states:
access forbidden by rule, client: xxxx:xxxx:xxxx:xxxx
However, there are no rules like this set in the config that I can see:
server {
    listen 80;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name <redacted>;
    root /var/www/<redacted>;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        include fastcgi_params;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
    listen 443 ssl;
    ssl_certificate <redacted>;
    ssl_certificate_key <redacted>;
    include <redacted>;
    ssl_dhparam <redacted>;
}
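For context, the message "access forbidden by rule" is produced by nginx's access module (the allow/deny directives), so the rule is most likely hiding in one of the included files rather than in this block. A rule of this shape, somewhere in the effective configuration, would produce exactly that log line (the address range here is hypothetical):

```nginx
location / {
    deny 203.0.113.0/24;   # any deny matching the client IP
    allow all;             # logs "access forbidden by rule" and returns 403
}
```

Dumping the full effective config with nginx -T and searching it for deny would show whether one of the redacted includes contains such a rule.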

Forge / Nginx DO WWW to non-WWW redirect issue

I just transferred a site to a DO server provisioned by Forge. I installed an SSL certificate and noticed that navigating to https://www.example.com results in a Server Not Found error, while http://example.com returns 200. I attempted to force non-WWW in the Nginx config file but cannot seem to make anything work. I also restarted Nginx after every attempt.
Here is my current Nginx config file:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name .example.com;
    root /home/forge/default/current/public;
    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/default/56210/server.crt;
    ssl_certificate_key /etc/nginx/ssl/default/56210/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    index index.html index.htm index.php;
    charset utf-8;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    access_log off;
    error_log /var/log/nginx/default-error.log error;
    error_page 404 /index.php;
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
    location ~ /\.ht {
        deny all;
    }
}
The server was set up with the default site, default, rather than example.com. I realized this after launching the site to production and installing the SSL cert, and I am trying to avoid downtime by changing this after the fact. I am not sure whether the site being called default makes any difference here, but it's worth noting.
So https://example.com and http://example.com work fine, but www.example.com returns a Server Not Found error in every browser I've tested. I also noticed that there is a www.default file in /etc/nginx/sites-enabled; I tried changing it to the following and restarting nginx:
server {
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com/$request_uri;
}
Still receiving Server Not Found no matter what. Here is the error on Chrome:
The server at www.example.com can't be found, because the DNS lookup failed. DNS is the network service that translates a website's name to its Internet address. This error is most often caused by having no connection to the Internet or a misconfigured network. It can also be caused by an unresponsive DNS server or a firewall preventing Google Chrome from accessing the network.
Well, apparently I just needed to take a break. After I finished my lunch, it occurred to me that Chrome had been giving me the answer all along: it was a DNS issue. I added an A record for www pointing to my IP address on DigitalOcean, problem solved.
I believe www is missing by default on DO servers provisioned by Laravel Forge.
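In zone-file terms, the missing record was simply something like this (the IP address is a placeholder):

```zone
www.example.com.  IN  A  203.0.113.10
```

Once that resolves, the www.default redirect block above takes over and sends www traffic to the bare domain.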

nginx server sometimes returns 404 Not Found for a valid URL

I am using an nginx server with PHP-FPM for web development. While sending continuous AJAX calls from the browser to the server, I sometimes get 404 Not Found errors for a valid URL. (When I open the same URL in a new browser tab, the page displays properly.)
I am unable to debug why nginx is behaving like this. I don't know if it is dropping connections. What should I do?
I am using the default installation of nginx and have not made any changes to it.
This is my nginx.conf
server {
    listen IP address with PORT ssl;
    server_name SERVER Name;
    root /u01/projectfolder;
    ssl on;
    ssl_certificate /etc/nginx/ssl/36287365.net.cert;
    ssl_certificate_key /etc/nginx/ssl/36287365.net.key;
    index index.php index.html;
    log_not_found off;
    charset utf-8;
    location /rainbow {
        try_files $uri $uri/ /rainbow/index.php$is_args$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9101;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
I have also added logging on the PHP side in index.php, but no logs are generated for the 404 requests. I guess those requests never reach PHP through FastCGI at all.
Please help.
This may happen if you have more than one server listening on the same port; make sure only one PHP-FPM server listens on 9101.
I've experienced this with two different Node.js servers running simultaneously and listening on the same port: about 50% of the requests returned HTTP 404 while the rest returned 200 OK.
You can verify that it's not nginx's fault by generating the same request with curl directly on the machine, bypassing nginx. If the error still occurs, nginx is not to blame.

Multiple domains on one server points to wrong sites

Using nginx, I've created a multiple-domain setup on one server consisting of four individual sites. When I start nginx I get an error message, and the sites seem to get mixed up: typing in one URL leads to one of the other sites.
The error message displayed -
Restarting nginx: nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx.
I've set up all four domains in a similar manner in their respective files under /sites-available:
server {
    listen 80;
    root /var/www/example.com/public_html;
    index index.php index.html index.htm;
    server_name example.com www.example.com;
    location / {
        try_files $uri $uri/ /index.html;
    }
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
I've checked, and there is no default file in /sites-enabled. I'm guessing there might be a faulty setting in the nginx main config, but I'm not sure what to look for.
Your nginx.conf loads its external server files from the paths given in its include directives.
If a file is included via include /etc/nginx/conf.d/*.conf; and is also symlinked into /etc/nginx/sites-enabled, it will be loaded twice, which causes that warning.
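In other words, a main config like this (a hypothetical but common Debian-style layout) loads the same server block twice when a file under conf.d is also symlinked into sites-enabled:

```nginx
http {
    include /etc/nginx/conf.d/*.conf;      # loads the file once
    include /etc/nginx/sites-enabled/*;    # loads the symlinked copy again
    # the duplicate triggers: conflicting server name "..." ignored
}
```

Removing the symlink (or narrowing one of the include globs) so each server block is loaded exactly once makes the warning go away.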
I was having the same problem with my Ubuntu/nginx/gunicorn/Django 1.9 sites on my local machine. I had two nginx files in /etc/nginx/sites-enabled. Removing either one allowed the remaining site to work; with both files in place, everything always went to one of the two sites, and I'm not sure how it chose.
So after looking at several Stack Overflow questions without finding a solution, I went here: http://nginx.org/en/docs/http/request_processing.html
It turns out that you can have multiple server blocks in one sites-enabled file, so I changed to this:
server {
    listen 80;
    server_name joelgoldstick.com.local;
    error_log /var/log/nginx/joelgoldstick.com.error.log debug;
    location / {
        proxy_pass http://127.0.0.1:8002;
    }
    location /static/ {
        autoindex on;
        alias /home/jcg/code/python/venvs/jg18/blog/collect_static/;
    }
}
server {
    listen 80;
    server_name cc-baseballstats.info.local;
    error_log /var/log/nginx/baseballstats.info.error.log debug;
    location / {
        proxy_pass http://127.0.0.1:8001;
    }
    location /static/ {
        autoindex on;
        alias /home/jcg/code/python/venvs/baseball/baseball_stats/collect_static/;
    }
}
I can now access both of my sites locally.
Check the /etc/nginx/sites-enabled/ directory for any temp file such as ~default. Delete it and the problem is solved.
Credit: #OmarIthawi, nginx error "conflicting server name" ignored
In my case there was no sites-enabled problem and no double include.
The solution was to avoid more than one reference (considering all of the conf.d files as a whole) to the same listen 80 and server_name combination.
In my case, default.conf and kibana.conf both included references to these; I commented out the ones in default.conf and the problem was solved.
My 2 cents.
