Trying to prevent direct IP access Nginx SSL - nginx

I've been searching for the answer here regarding the following code:
server {
    listen 443 ssl;
    server_name "host.domain.com";
    location / {
        proxy_pass http://host.domain.com;
    }
    ssl_certificate     /etc/httpd/ssl/Sample_StarCert.crt;
    ssl_certificate_key /etc/httpd/ssl/Sample-NPW.key;
}
This does what it's supposed to do for the most part by reverse-proxying SSL requests to an http server on the same machine for host.domain.com requests (by design).
The problem arises when I try to access the same site via https://ipaddress
The browser is then confronted with a certificate warning. I need that warning to go away under these circumstances, either by blocking the attempt or by redirecting it to an FQDN request. Either is fine.
I've been trying to accomplish this with other server blocks, but the problem I keep running into is that SSL server blocks seem to want to present the certificate to the browser before processing the request. That is strange to me, since I thought SNI (which this server is compiled with) is designed to recognize the requested name before a certificate needs to be presented. It's very likely I just misunderstand how this all works, and I would welcome the guidance.
I tried conditional statements with no success as well. Any ideas?
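If blocking the attempt is enough, one option worth mentioning (this assumes nginx 1.19.4 or newer, which added ssl_reject_handshake) is a catch-all default server on 443 that refuses the TLS handshake for any name it doesn't serve, so no certificate is ever presented to the browser. A minimal sketch:

# Sketch only, assuming nginx 1.19.4+ for ssl_reject_handshake
server {
    listen 443 ssl default_server;
    server_name _;
    # refuse the handshake instead of presenting a certificate
    ssl_reject_handshake on;
}

server {
    listen 443 ssl;
    server_name host.domain.com;
    ssl_certificate     /etc/httpd/ssl/Sample_StarCert.crt;
    ssl_certificate_key /etc/httpd/ssl/Sample-NPW.key;
    location / {
        proxy_pass http://host.domain.com;
    }
}

Requests for https://ipaddress (or any unknown name) never get past the handshake, while SNI requests for host.domain.com land in the named block as before.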

I've often seen this attempt for port 80:
server {
    listen 80;
    server_name _;
    return 444;
}
However, this won't work for SSL, and it can leave your website inaccessible when it is requested through the domain.
I figured out that you can simply check whether the HTTP Host header matches the server name in the primary SSL server block, like so:
server {
    listen 443 ssl;
    server_name YOUR_DOMAIN;
    index index.php index.html;
    root /var/www/html/;

    # block direct IP access
    if ($http_host != $server_name) {
        return 444;
    }
}
This approach also works for HTTP, if you prefer some consistency.
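For reference, a minimal sketch of the same check in a plain-HTTP server block (YOUR_DOMAIN is the same placeholder as above):

server {
    listen 80;
    server_name YOUR_DOMAIN;

    # block direct IP access over plain HTTP as well
    if ($http_host != $server_name) {
        return 444;
    }
}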

Related

Nginx choosing wrong server

I want to handle sub.domain.com and domain.com with different server blocks.
So I created the following config:
server {
    listen 443 ssl;
    server_name sub.domain.com;
    location / {
        ...
    }
}

server {
    listen 443 ssl;
    server_name domain.com;
    location / {
        ...
    }
}
Requests to sub.domain.com get correctly handled by the first server block. However, requests to domain.com also get handled by the first one.
Why?
From what I understand from the docs, requests to domain.com shouldn't be matched by sub.domain.com?
This probably won't apply to all cases, but in my case I was using a .dev domain (which requires HTTPS). I hadn't set up the certificate yet, and I guess nginx was just using the first HTTPS-enabled site it could find, even though the server_name was not a match.
Once I enabled HTTPS using certbot and enabled the 443 redirect, it started working.
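If you'd rather make that fallback explicit instead of relying on whichever block happens to come first, a small sketch (the certificate paths here are placeholders, not from the original config) is a dedicated default server for 443:

# hypothetical catch-all so unmatched names don't fall into the first block
server {
    listen 443 ssl default_server;
    server_name _;
    # placeholder certificate paths; any valid certificate will do for a block that only refuses
    ssl_certificate     /etc/nginx/ssl/fallback.crt;
    ssl_certificate_key /etc/nginx/ssl/fallback.key;
    return 444;
}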
I had a similar problem and this question was quite high up in the Google results.
In my case I had a development subdomain running http only, and the main domain running https. One of the redirects on the main domain caught the subdomain request and forwarded it to the main site.
The fix was to specify http:// in the web browser or to enable https for the development domain.

Block direct IP access with NGINX with site behind Cloudflare

I'm trying to block direct IP access with NGINX.
I added the following block
server {
    listen 80 default_server;
    server_name "";
    return 444;
}
I have another server block
server {
    listen 80;
    server_name aaa.domain.com;
    ...
}
The problem is that after adding the server block for refusing direct IP access, I can no longer access my website via aaa.domain.com
It seems the first server block is catching all requests.
Note, I'm using Cloudflare, and I wonder if it might be related? Perhaps NGINX detects the incoming request from Cloudflare as being of direct IP access and blocks it? If so, how could I solve this?
If it matters, the above server blocks are on different files located in sites-enabled dir.
Cloudflare publishes its list of IPs and ranges. Deny everything, allow traffic from those IPs, and it will work -> https://serverfault.com/questions/601339/how-do-i-deny-all-requests-not-from-cloudflare
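A rough sketch of that idea (the ranges below are examples only; take the current list from Cloudflare's published IP page before using anything like this):

server {
    listen 80;
    server_name aaa.domain.com;

    # allow only traffic that arrives through Cloudflare
    allow 173.245.48.0/20;   # example Cloudflare range -- verify against the current list
    allow 103.21.244.0/22;   # example Cloudflare range -- verify against the current list
    deny  all;               # everything else, including direct IP access, is refused

    # ... rest of the site configuration
}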

Nginx subdomain routing during DNS propagation when switching data center

I have three servers: One serving data through an API with IP address 11.111.11.11, one serving static resources with IP address 22.222.22.22 and one Nginx reverse proxy with IP address 33.333.33.33 routing data to the other two.
I have 3 type A DNS records for example.com, www.example.com and api.example.com all pointing to the Nginx server with IP address 33.333.33.33. It then redirects the requests either to itself or to one of the other two servers depending on the protocol and subdomain in the request. The Nginx server blocks look like this:
# Redirect static request to HTTPS with no subdomain
server {
    listen 80 default_server;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

# Redirect API request to HTTPS
server {
    listen 80;
    server_name api.example.com;
    return 301 https://api.example.com$request_uri;
}

# Redirect www request to same without subdomain
server {
    listen 443 ssl;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}

# Route to static server
server {
    listen 443 ssl;
    server_name example.com;
    ### SSL conf...
    location / {
        proxy_pass http://22.222.22.22:8080;
    }
}

# Route to API server
server {
    listen 443 ssl;
    server_name api.example.com;
    ### SSL conf...
    location / {
        proxy_pass http://11.111.11.11:8080;
    }
}
Pretty basic stuff.
Now I have been forced to switch data centers. I now have a copy of the exact same setup at the new data center, but the IP addresses are now 44.444.44.44, 55.555.55.55 and 66.666.66.66 instead.
Now comes the tricky part. I now want to update the DNS records (which can take up to 24h) to all point to the Nginx server with IP address 66.666.66.66 in the new data center without downtime.
To enable this I'm thinking of changing the configuration of the old Nginx server so that it routes all traffic to the new Nginx server. However, this will (probably) not work, since the routing is done using the request subdomains, and that information is lost in the redirect. I am unable to route traffic from the old Nginx server to the new API and static servers, since the servers talk to each other over a private network. The traffic also has to keep its encryption when on the internet, and can only be sent decrypted within the same data center.
How can I do this data center switch, while having no down time during the DNS propagation?
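One possible stopgap, offered as a sketch rather than a tested recipe (it assumes the old proxy keeps its certificates during the propagation window): keep terminating TLS on the old proxy, then re-encrypt and forward to the new proxy's public IP while preserving the requested host name, so the new proxy can still route by subdomain and the traffic stays encrypted on the internet.

# hypothetical stopgap on the OLD proxy (33.333.33.33) during DNS propagation
server {
    listen 443 ssl;
    server_name example.com www.example.com api.example.com;
    ### SSL conf as before...

    location / {
        # forward to the new proxy (66.666.66.66 in the question), re-encrypted
        proxy_pass https://66.666.66.66;
        proxy_set_header Host $host;   # keep the original host so subdomain routing still works
        proxy_ssl_server_name on;      # send SNI to the new proxy
        proxy_ssl_name $host;
    }
}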

nginx: Redirect request with server having a dynamic ip

I have an nginx server set up behind a router that gets a new IP address every day.
Now I want to redirect every user who accesses the site via
http://[IP]/
to
https://[IP]/
This works fine inside the local network, but if I access the site from outside, it obviously doesn't.
Here's an extract from my nginx configuration file (/etc/nginx/sites-available/ ...):
server_name $hostname;

ssl_certificate     /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;

# redirect to HTTPS
if ($scheme = http) {
    return 301 https://$server_name$request_uri;
}
How can I make server_name change every day, when the public IP changes?
I can't use something like dyndns or noIP.
Is it even possible to change this during runtime without restarting nginx?
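One possibility, assuming the goal is simply that the redirect lands on whatever address the client originally typed: build the redirect from $host (taken from the request) instead of $server_name (the literal value from the config), so nginx never needs to know the current public IP. A minimal sketch:

# sketch: redirect to HTTPS using whatever host the client asked for
server {
    listen 80;
    # $host echoes the request's Host header -- the LAN name or the current public IP
    return 301 https://$host$request_uri;
}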

How to configure nginx to proxy another service serving http and https on different ports?

Use case:
Using nginx as a frontend for several websites / services running on both 80 and 443 (several virtual hosts).
Having service x running on localhost that serves http:8090 and https:8099
How do I need to configure nginx so people can access the service using only the name, without specifying the port?
This is a fairly normal setup. Configure the hosts served directly by Nginx as normal. Since they need to listen on both 80 and 443, each host entry would have this in it:
server {
    listen 80;
    listen 443 ssl;
}
The Nginx SSL docs have the full details.
Then proxy traffic for one server{} definition to the backend service:
server {
    server_name example.com;
    location / { proxy_pass http://127.0.0.1:8090; }
}
You only need one proxy connection to the backend server, either 'http' or 'https'. If the connection between the two servers is secure, you can use 'http', even for connections that arrive at nginx over https. This might be appropriate if the service is on the same machine. Otherwise, all the traffic could be proxied over https if the connection between nginx and the backend server needs to be secured.
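Putting those pieces together, a minimal sketch for the setup in the question (certificate paths are placeholders) might look like this, with the https variant to port 8099 shown commented out for the case where the hop to the backend must itself be encrypted:

server {
    listen 80;
    listen 443 ssl;
    server_name example.com;

    # placeholder certificate paths
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8090;     # plain hop; fine when the service is on the same machine
        # proxy_pass https://127.0.0.1:8099;  # use this instead if the hop must be encrypted
    }
}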
We use the following with our host:
http {
    server {
        server_name ~^(www\.)?(?<domain>.+)$;
        listen *:80;
        location / {
            proxy_pass $scheme://<origin>$uri$is_args$args;
            include basic-proxy-settings.conf;
        }
    }

    server {
        server_name ~^(www\.)?(?<domain>.+)$;
        listen *:443 ssl;
        location / {
            proxy_pass $scheme://<origin>$uri$is_args$args;
            include basic-proxy-settings.conf;
        }
        include ssl-settings.conf;
    }
}
This allows our upstream proxy to talk to our origin server over HTTP when a request is made by a client for an insecure resource, and over SSL/HTTPS when a request is made for a secure one. It also allows our origin servers to be in charge of forcing redirects to secure connections, etc.
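The contents of basic-proxy-settings.conf aren't shown here; purely as a hypothetical illustration, such an include often carries the usual forwarding headers, along these lines:

# hypothetical example of what a basic-proxy-settings.conf include might contain;
# the original file's contents are not part of the answer above
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;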
Next time, why not provide a code sample detailing what you've tried, what has worked, and what hasn't?
