Nginx is serving the whole domain and not just the subdomain

I've installed Nginx on a fresh EC2 instance (Amazon Linux 2) with a basic config file:
server {
    listen 80;
    listen [::]:80;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name atlasalgorithms.kadiemqazzaz.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
Now Nginx is serving both http://atlasalgorithms.kadiemqazzaz.com and http://kadiemqazzaz.com but I want Nginx to serve only http://atlasalgorithms.kadiemqazzaz.com.
I declared only atlasalgorithms.kadiemqazzaz.com in the server_name so what am I missing?

The rule server_name atlasalgorithms.kadiemqazzaz.com; is indeed matching only http://atlasalgorithms.kadiemqazzaz.com.
However, it is the only server block in the conf file, which means it also acts as the default server. Since http://kadiemqazzaz.com matches no server_name, the request is routed to the default server block.
nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server
Read more about request routing in the nginx documentation on how nginx processes a request.
If you need different routing for http://kadiemqazzaz.com, add another server block defining different rules, for example a catch-all default server like the sketch below.
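A minimal sketch, assuming the bare domain should not be served at all: an explicit default server that catches every request whose Host header matches no other block and closes the connection (444 is nginx's non-standard "close the connection without a response" code).

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Catch-all name that never matches a real hostname
    server_name _;

    # Drop the connection without sending a response
    return 444;
}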

Related

Better to proxy port 80 or redirect to server:port

The context: I have a webapp running on port 9001, but I want app.company.com to point to my application. I'm not sure if this matters, but I have 2 approaches I can take here:
1. Create a virtual host to proxy HTTP traffic to port 9001.
2. Create a redirect on the domain layer to server.company.com:9001.
Which would be the better route, and why does it matter?
Your question is a little broad, but in general: if you are using nginx, your application will likely have some static content such as CSS, JS, fonts, etc. You would usually want nginx to serve that static content and have your app (i.e. the one running on port 9001) handle only the dynamic requests. With that in mind, you can have nginx listen on port 80 and pass the dynamic requests on to port 9001.
Example:
server {
    listen 80 default_server;
    listen [::]:80;

    root /var/www/html;
    server_name example.com;

    location /static {
        alias /var/www/html/static;
    }

    location / {
        # Serve the file if it exists, otherwise hand off to the backend
        try_files $uri @backend;
    }

    location @backend {
        proxy_pass http://server.company.com:9001;
        # other configuration settings here
    }

    # other location configuration here
}
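As for which route is better: a redirect exposes server.company.com:9001 in the user's address bar and requires that port to be reachable from the outside, while the proxy approach keeps a single clean hostname, lets nginx serve static files (and terminate TLS later), and allows the app on port 9001 to stay firewalled off from direct access. In most setups the proxy is the better choice.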

How can I hide a file from the browser, yet still use it on the webserver with NGINX?

Here's my scenario:
I have a vagrant cloud set up at an IAAS provider. It uses a .json file as its catalog to direct download requests from vagrant over to their corresponding .box files on the server.
My goal is to hide the .json file from the browser so that a surfer cannot hit it directly at, say, http://example.com/catalog.json and see the json output, since that output lists the url of the box file itself. However, I still need vagrant to be able to download and use the file so it can grab the box.
In the NGINX docs, it mentions the "internal" directive which seems to offer what I want to do via try_files, but I think I'm either mis-interpreting what it does or just plain doing it wrong. Here's what I'm working with as an example:
First, I have two sub-domains.
One for the .json catalog at: catalog.example.com
A second for the box files at: boxes.example.com
These are mapped, of course, to respective folders on the server, etc.
With that in mind, in sites-available/site.conf, I have the following server blocks:
server {
    listen 80;
    listen [::]:80;

    server_name catalog.example.com;
    server_name www.catalog.example.com;

    root /var/www/catalog;

    # Use try_files to trigger the internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve json files to scripts only with content type header application/json
    location ~ \.json$ {
        internal;
        add_header Content-Type application/json;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name boxes.example.com;
    server_name www.boxes.example.com;

    root /var/www/boxes;

    # Use try_files to trigger the internal directive to serve box files
    location / {
        try_files $uri =404;
    }

    # Serve box files to scripts only with content type application/octet-stream
    location ~ \.box$ {
        internal;
        add_header Content-Type application/octet-stream;
    }
}
The NGINX documentation for the internal directive states:
Specifies that a given location can only be used for internal requests. For external requests, the client error 404 (Not Found) is returned. Internal requests are the following:
requests redirected by the error_page, index, random_index, and try_files directives;
Based on that, my understanding is that my server blocks grab any path for those sub-domains and then, passing it through try_files, should make that available when called via vagrant, yet hide it from the browser if I hit the catalog or a box url directly.
I can confirm that the files are not accessible from the browser; however, they're inaccessible to vagrant as well.
Am I mis-understanding internal here? Is there a way to achieve my goal?
Make sure the server listens on localhost only for the sensitive calls:

1. Create a tunnel between the machine running vagrant (using an arbitrary port) and your IAAS provider machine (on the web server port, for example); see the example command below.
2. Create a user on your IAAS machine who is only allowed to interact with the forwarded web-server port (via sshd_config). For details, see https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
3. Reference the tunneled server using http://localhost:<port>/path in both your catalog.json url and your box file urls.
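For example, the tunnel might be set up like this (a sketch; the local port 8080, the user name, and the host are placeholders, not values from the original answer):

# Run on the machine that runs vagrant: forward local port 8080
# to port 80 on the IAAS machine; -N opens no remote shell
ssh -N -L 8080:127.0.0.1:80 tunneluser@iaas.example.com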
Then, on the nginx side:

1. Use a server block in your NGINX config which listens on 127.0.0.1:80 only and doesn't use server_name. You can even add default_server to it so that anything that doesn't match another virtual host hits this block.
2. Use two locations in your config with different roots to serve files from /var/www/catalog and /var/www/boxes respectively.
3. Set regex locations for your .json and .box files and use a try_files block to accept the $uri or return 444 (so you know it hit your block).
4. Deny /boxes and /catalog otherwise.

See the nginx config below for an example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name example.com;
    server_name www.example.com;

    root /var/www;

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name store.example.com; # I will use an eCommerce platform eventually

    root /var/www/store;
}

server {
    # Reachable only via the loopback interface, i.e. through the SSH tunnel
    listen 127.0.0.1:80;
    listen [::1]:80;

    root /var/www;

    location ~ \.json$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/json;
    }

    location ~ \.box$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/octet-stream;
    }

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
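With the tunnel in place, vagrant reaches the localhost-only block through the forwarded port, while a browser hitting the public names gets the 403 from the deny rules. A quick way to test, assuming the sketch above with local port 8080 and a catalog file at /var/www/catalog.json (both placeholders):

# From the machine running vagrant, through the SSH tunnel:
curl http://localhost:8080/catalog.json

# From anywhere, via the public name: hits the deny rule and returns 403
curl http://example.com/catalog/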
I think all you need here is to change the file's access level. There are 3 permission levels (execute, read and write), and you can remove the execute permission from your file. On the server console, run the command:

chmod 766 your_file_name

See the chmod documentation for more information.

How does nginx know my server's URL address?

I installed nginx using sudo apt-get install nginx.
This allows me to go to my_ip:port and visit the website.
Yet I can also go to my_url:port and it will also direct me to the website.
How can nginx know my_url when I have not told it my_url anywhere?
I was running Apache before; can that explain it?
Nginx was able to load via the FQDN my_url:port, even though you haven't added my_url to the nginx config, because a default_server (usually there by default) was specified.
The default_server parameter specifies which block should serve a request if the requested server_name does not match any of the available server blocks.
For example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}
Nginx doesn't need it (at least, not yet). Your web browser looks up my_url in the DNS, and then uses my_ip (from DNS) :port (which you entered in your browser) to connect to Nginx.
Your Nginx is probably only configured with one site, which means any connection to it - regardless of whether it is by IP or by domain name - causes Nginx to serve that site. You can change this by going into your Nginx configuration files and setting (or changing) the value of the server_name parameter, for example:
server {  # You already have a server block somewhere in the config file
    listen 80;  # Or 443, if you've enabled SSL
    server_name example.com www.example.com;  # Add (or change) this line to list the addresses you want to answer to
    # ...
}
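You can watch this Host-header routing in action with curl; a quick illustration (my_ip and the hostnames are placeholders):

# nginx routes by the Host header, not by the address you connected to
curl -H "Host: example.com" http://my_ip/

# An unknown Host (or a bare IP) falls through to the default_server block
curl -H "Host: unknown.example.org" http://my_ip/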

nginx: how to redirect to https while still serving one directory via http?

I want to prevent people from using my site via http and force them to use a secure connection. My https certificate is issued by letsencrypt (via the webroot option), which means Let's Encrypt connects via http to fetch the static content I serve from /.well-known/acme-challenge/. All other requests should be redirected to https. Here is the relevant part of my nginx.conf:
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com admin.example.com;

    location /.well-known/acme-challenge {
        root /app;
        access_log on;
        try_files $uri $uri/ =418;
    }

    return 301 https://$server_name$request_uri;
}
This https upgrade works fine and all users get an https connection as intended. The problem is that nginx upgrades EVERY request made, even those from letsencrypt, which causes letsencrypt to fail: it doesn't even try to serve the file (the file exists!).
How can I ensure that a request coming via http to example.com/.well-known/acme-challenge/[HASH] serves the file if found (or returns a 418 error), while all other requests which don't start with /.well-known/acme-challenge are upgraded to https? Thanks for any suggestions.
The return 301 is at server scope, so it is executed for every request before any location is considered, which is not what you want. Place the return inside a default location:

location / {
    return 301 https://$server_name$request_uri;
}
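Put together with the ACME location, the whole block would look roughly like this (a sketch assembled from the config in the question; the 418 fallback and the /app root are carried over from there):

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com admin.example.com;

    # Served over plain http so Let's Encrypt can fetch the challenge files
    location /.well-known/acme-challenge {
        root /app;
        access_log on;
        try_files $uri $uri/ =418;
    }

    # Every other request is upgraded to https
    location / {
        return 301 https://$server_name$request_uri;
    }
}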

When do we need to use http block in nginx config file?

I am reading the nginx beginner's tutorial; in the section Serving Static Content they have:

http {
    server {
    }
}
but when I add an http block I get the error
[emerg] "http" directive is not allowed here …
When I remove the http block and change the conf file to this, it works fine:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/example.com/html;
    index index.html index.htm;

    # make site accessible from http://localhost/
    server_name localhost;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
I suspect that I am missing something simple, but why do they use http to serve static files?
You're doing fine. I guess you are editing /etc/nginx/sites-enabled/default (or the linked file at /etc/nginx/sites-available/default).
This is the standard nginx setup. It is configured via /etc/nginx/nginx.conf, which contains the http { } statement. That in turn contains an "include /etc/nginx/sites-enabled/*;" line that pulls in your file above with its server { } clause.
Note that if you are using an editor that creates a backup file, you must modify the include statement to exclude the backup files, or you will get some "interesting" errors! My line is
include /etc/nginx/sites-enabled/*[a-zA-Z]
which will not pick up backup files ending in a tilde. YMMV.
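For reference, the layered layout looks roughly like this (a simplified sketch; exact paths and contents vary by distribution):

# /etc/nginx/nginx.conf (simplified)
http {
    # ... http-level settings shared by all sites ...
    include /etc/nginx/sites-enabled/*;
}

# /etc/nginx/sites-enabled/default
# Included inside the http block above, so it starts directly
# with a server block and must not repeat http { } itself.
server {
    listen 80 default_server;
    # ...
}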
