nginx - two subdomain configuration

I'm new to Nginx and I'm trying to get subdomains working.
What I would like to do is take my domain (let's call it example.com) and add:
sub1.example.com,
sub2.example.com, and also have
www.example.com available.
I know how to do this with Apache, but Nginx is being a real head scratcher.
I'm running Debian 6.
My current /etc/nginx/sites-enabled/example.com:
server {
    server_name www.example.com example.com;

    access_log /srv/www/www.example.com/logs/access.log;
    error_log /srv/www/www.example.com/logs/error.log;
    root /srv/www/www.example.com/public_html;

    location / {
        index index.html index.htm;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/www.example.com/public_html$fastcgi_script_name;
    }
}
It works for serving example.com and www.example.com.
I have tried to add a second server block in the same file, like:
server {
    server_name www.example.com example.com;

    access_log /srv/www/www.example.com/logs/access.log;
    error_log /srv/www/www.example.com/logs/error.log;
    root /srv/www/www.example.com/public_html;

    server {
        server_name sub1.example.com;
        access_log /srv/www/example.com/logs/sub1-access.log;
        error_log /srv/www/example.com/logs/sub1-error.log;
        root /srv/www/example.com/sub1;
    }

    location / {
        index index.html index.htm;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/www.example.com/public_html$fastcgi_script_name;
    }
}
No luck. Any ideas? I'd super appreciate any feedback.

The mistake is putting a server block inside another server block. Close the main server block, then open a new one for each subdomain:
server {
    server_name example.com;
    # the rest of the config
}

server {
    server_name sub1.example.com;
    # sub1 config
}

server {
    server_name sub2.example.com;
    # sub2 config
}
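Filled in with the paths from the question, a sketch of the full layout might look like this (PHP handling omitted for brevity; the sub1 paths are taken from the question's own attempt):

server {
    server_name www.example.com example.com;
    root /srv/www/www.example.com/public_html;
    access_log /srv/www/www.example.com/logs/access.log;
    error_log /srv/www/www.example.com/logs/error.log;

    location / {
        index index.html index.htm;
    }
}

server {
    server_name sub1.example.com;
    # paths from the question; point these at the subdomain's own directory
    root /srv/www/example.com/sub1;
    access_log /srv/www/example.com/logs/sub1-access.log;
    error_log /srv/www/example.com/logs/sub1-error.log;

    location / {
        index index.html index.htm;
    }
}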

You just need to add the following line in place of your server_name
server_name xyz.com *.xyz.com;
And restart Nginx. That's it.
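Note that a single wildcard server_name serves the same content for every subdomain from one server block; to serve different content per subdomain you still need separate server blocks, as in the answer above. A minimal sketch of the wildcard form (the root path is an assumption):

server {
    listen 80;
    # matches xyz.com plus any subdomain, all served from one root
    server_name xyz.com *.xyz.com;
    root /srv/www/xyz.com/public_html;  # assumed path
}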

Add an A record for each subdomain (sub1.example.com and sub2.example.com) with your DNS provider.
Then set up the server blocks, keeping example.com last, as below:
server {
    server_name sub1.example.com;
    # sub1 config
}

server {
    server_name sub2.example.com;
    # sub2 config
}

server {
    server_name example.com;
    # the rest of the config
}
Restart Nginx:
sudo systemctl restart nginx

You'll have to create another nginx config file with a server block for your subdomain, like so:
/etc/nginx/sites-enabled/subdomain.example.com
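A minimal sketch of what that file could contain (the root path is an assumption):

# /etc/nginx/sites-enabled/subdomain.example.com
server {
    listen 80;
    server_name subdomain.example.com;
    root /srv/www/subdomain.example.com/public_html;  # assumed path

    location / {
        index index.html index.htm;
    }
}

If your distribution uses sites-available, create the file there, symlink it into sites-enabled, and reload nginx.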

There is a very customizable solution, depending on your server implementation: two (or more) subdomains in a single nginx "sites" file. This is good if you own a wildcard TLS certificate and want to maintain one nginx config file, with all subdomains using the same services but on different ports (think of different app versions running concurrently, each listening locally on a different port).
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # capture the subdomain into $sub (dots escaped, pattern anchored);
    # ssl_certificate directives omitted here
    server_name ~^(?<sub>.+)\.example\.com$;

    # default app (the "default" ports, good for the "old" app);
    # note: nginx variable names cannot contain "-", hence $app_chat
    set $app 19069;
    set $app_chat 19072;

    # new app
    # new.example.com
    if ( $sub = "new" ) {
        set $app 18069;
        set $app_chat 18072;
    }

    # upstreaming
    location / {
        proxy_redirect off;
        proxy_pass http://127.0.0.1:$app;
    }

    location /longpolling {
        proxy_pass http://127.0.0.1:$app_chat;
    }
}
I know the performance will "suck", but then again, since the decision was to go with one server for everything, it's like complaining that an econobox cannot haul as many people as a bus because the little car has a "heavy" roof rack on top of it.
A regex expert could potentially improve the performance of such a custom solution, especially since it could omit the CPU-expensive "if" conditional statements.
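As an illustration of that idea (a sketch, not from the original answer): nginx's map directive can derive the ports from the captured $sub without any if blocks. map blocks live at the http level, outside the server block:

# at http level, alongside the server block above
map $sub $app {
    default 19069;   # old app
    new     18069;   # new.example.com
}

map $sub $app_chat {
    default 19072;
    new     18072;
}

The server block then only needs the two proxy_pass locations; the set and if lines can be dropped.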

Maybe this can help someone having a challenge with this; it had me grinding the whole day.
First, if you have SSL installed, remove it (better still, delete it); this helps reset the previous configuration that is disrupting the subdomain configuration.
Then, in the /etc/nginx/.../default file, create a new server block:
server {
    listen 80;
    server_name subdomain.domain.com;

    location / {
        # $port: placeholder, replace with the port your app listens on
        proxy_pass http://localhost:$port;
    }
}
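Whichever of these approaches you take, it is worth validating the configuration before applying it, for example:

sudo nginx -t && sudo systemctl reload nginx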

Related

Nginx and relative paths for proxied subdomain

I'm a newbie with Nginx, trying to learn.
I have the server under mydomain.com and my static site under my-app.mydomain.com.
All the paths are relative, so images/image.png resolves to my-app.mydomain.com/images/image.png.
I also have a second app, new-app.mydomain.com, which has the same issue: its relative paths get resolved against mydomain.com.
I don't know how to fix this, and I would like to avoid making all the paths absolute. I would also like a solution that lets me keep adding new location blocks for new apps and still load their resources; I want to avoid something restrictive that could work for the main app but not for the others.
location /new-app {
    proxy_ssl_server_name on;
    proxy_pass https://mydomain.com;
}
I would appreciate any help.
Location blocks inside a server block help you configure content that needs to be displayed on sub-routes of a website. Subdomains need to be configured in a separate nginx file, similar to the main domain's, in which you can add as many location blocks as you require.
Nginx files:
mydomain.com
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mydomain.com;

    location / {
        proxy_ssl_server_name on;
        proxy_pass http://mydomain.com; # replace with the host:port your backend listens on inside the VM

        # or, to serve static files instead of proxying:
        # root /var/www/html/mydomain.com;
        # try_files $uri $uri/ /index.html;
    }
}
new-app.mydomain.com
server {
    # only one server block per port may be default_server, so it is omitted here
    listen 80;
    listen [::]:80;
    server_name new-app.mydomain.com;

    location / {
        proxy_ssl_server_name on;
        proxy_pass http://new-app.mydomain.com; # replace with the backend's host:port

        # or, to serve static files instead of proxying:
        # root /var/www/html/new-app.mydomain.com;
        # try_files $uri $uri/ /index.html;
    }
}

Running two apps on a single server name with different ports on NGINX

Good day SO.
I don't know if I used the right terminology in my question, but I will try to explain what my concern is and what I am trying to achieve.
I have an app that runs on the NGINX server with SSL (port 443). On this app, my domainName/chatroom page is served. I can open it with no problem, but there is no connection. To connect the chat room, there is another app in a different root folder that runs under an NPM server on port 3000. With this setup, the app works, connection and all. My question is: how can I make the app run on the NGINX server instead of having to run it on the NPM server every time?
Here is my NGINX config file:
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /path/to/cert/mydomain.pem;
    ssl_certificate_key /path/to/key/mydomain-key.pem;

    server_name mydomain;
    root /path/to/project/mydomain;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /app_dev.php$is_args$args;
    }

    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    location ~ \.php$ {
        return 404;
    }

    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log;
}

server {
    listen 80;
    listen [::]:80;
    server_name mydomain;
    return 301 https://$server_name$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name mydomain;
return 301 https://$server_name$request_uri;
}
Root folders:
/path/to/project/mydomain
/path/to/project/mydomain-chat
I tried to create a new server block with the same server_name on port 3000, but it did not work.
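The thread records no accepted fix, but a common pattern (a sketch, assuming the chat app keeps running as a Node process listening on 127.0.0.1:3000 and that the path /chat/ is free) is to let the existing HTTPS server block proxy that path to it, with WebSocket upgrade headers for the live connection:

# inside the existing 443 server block
location /chat/ {
    proxy_pass http://127.0.0.1:3000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}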

NGINX - Proxy the Request from VPS to Root Server

I would like to use my VPS as a proxy. That means if I call the VPS IP 123.123.123.123, the VPS should forward the request to my proper dedicated server (for example, 444.444.444.444) without the root server's IP appearing in the browser.
I had the code a long time ago, but I do not have the file anymore. Could someone help me with which lines I have to insert?
server {
    listen 123.123.123.123:80;
    server_name example.com;
    rewrite ^(.*) http://www.example.com$1 permanent;
}

server {
    listen 123.123.123.123:80;
    server_name www.example.com;

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass 127.0.0.1:9000;
    }
}
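The thread records no answer, but the usual building block (a sketch, assuming the dedicated server serves plain HTTP on port 80) is a proxy_pass location added to the existing www.example.com server block that forwards everything while passing along the original Host header:

# inside the existing www.example.com server block
location / {
    proxy_pass http://444.444.444.444;  # the dedicated server from the question
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}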

Forge / Nginx DO WWW to non-WWW redirect issue

I just transferred a site to a DO server provisioned by Forge. I installed an SSL certificate and noticed that navigating to https://www.example.com results in a Server Not Found error, while http://example.com returns 200. I attempted to force non-WWW in the Nginx config file but could not make anything work. I also restarted Nginx after every attempt.
Here is my current Nginx config file:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name .example.com;
    root /home/forge/default/current/public;

    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/default/56210/server.crt;
    ssl_certificate_key /etc/nginx/ssl/default/56210/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/default-error.log error;
    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
The server was set up with the default site, default, rather than example.com. I realized this after launching the site to production and installing the SSL cert, and I am trying to avoid any downtime by changing this after the fact. I am not sure if the site being called default makes any difference here, but it's key to note.
So, https:// or http://example.com works fine. www.example.com returns a Server Not Found error in all browsers I've tested. I also noticed that there is a www.default file in /etc/nginx/sites-enabled; I tried changing it to the following and restarting nginx:
server {
    listen 80;
    server_name www.example.com;
    # note: $request_uri already starts with "/", so no extra slash before it
    return 301 $scheme://example.com$request_uri;
}
Still receiving Server Not Found no matter what. Here is the error on Chrome:
The server at www.example.com can't be found, because the DNS lookup failed. DNS is the network service that translates a website's name to its Internet address. This error is most often caused by having no connection to the Internet or a misconfigured network. It can also be caused by an unresponsive DNS server or a firewall preventing Google Chrome from accessing the network.
Well, apparently I just needed to take a break. After I finished off my lunch, it occurred to me that Chrome was giving me the answer all along - it was a DNS issue. I added an A record for www pointing to my IP address on Digital Ocean, problem solved.
I believe the www DNS record is missing by default on DO servers provisioned by Laravel Forge.

Nginx redirecting to wrong vhost

I have around 1300 vhosts in one nginx conf file, all with the following layout (they are listed one after another in the vhosts file).
Now my problem is that sometimes my browser redirects site2 to site1, for some reason, while the domain names don't even match.
It looks like nginx is always redirecting to the first site in the vhosts file.
Does anybody know what this problem could be?
server {
    listen 80;
    server_name site1.com;
    rewrite ^(.*) http://www.site1.com$1 permanent;
}

server {
    listen 80;
    root /srv/www/site/public_html/src/public/;
    error_log /srv/www/site/logs/error.log;
    index index.php;
    server_name www.site1.com;

    location / {
        if (!-e $request_filename) {
            rewrite ^.*$ /index.php last;
        }
    }

    location ~ \.(php|phtml)$ {
        try_files $uri $uri/ /index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/site/public_html/src/public$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

server {
    listen 80;
    server_name site2.com;
    rewrite ^(.*) http://www.site2.com$1 permanent;
}

server {
    listen 80;
    root /srv/www/site/public_html/src/public/;
    error_log /srv/www/site/logs/error.log;
    index index.php;
    server_name www.site2.com;

    location / {
        if (!-e $request_filename) {
            rewrite ^.*$ /index.php last;
        }
    }

    location ~ \.(php|phtml)$ {
        try_files $uri $uri/ /index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/site/public_html/src/public$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
EDIT
Maybe another thing to mention is that I reload all these vhosts every 2 minutes with nginx -s reload.
In the first tests it looks like the redirection only happens when reloading... Going to do some more tests, but this could be helpful.
Reference (how nginx handles a request): http://nginx.org/en/docs/http/request_processing.html
In this configuration nginx tests only the request's header field "Host" to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port.
The default server is the first one, which is nginx's standard default behaviour.
Could you check the Host header of those bad requests?
You can also create an explicit default server to catch all of these bad requests and just log the request info (i.e., $http_host) into a different error log file for investigation:
server {
    listen 80 default_server;
    server_name _;
    error_log /path/to/the/default_server_error.log;
    return 444;
}
[UPDATE] Since you are doing nginx -s reload and you have so many domains in that nginx conf file, the following is possible. A reload works like this: new worker processes are started with the new configuration, then the old worker processes are shut down gracefully. So old and new workers can coexist for a while. For example, when you add a new server block (with a new domain name) to your config file, during the reload window the new workers will know the new domain but the old ones will not. When a request happens to be handled by an old worker process, the host is unknown to it, and the request is served by the default server.
You said the reload is done every 2 minutes. Could you run
ps aux | grep nginx
and check how long each worker has been running? If it's much longer than 2 minutes, the reload may not be working as you expect.
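If the box has a procps-style ps (an assumption), the elapsed-time column shows worker age directly:

# PID, elapsed run time, and command for every nginx process
ps -o pid,etime,args -C nginx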