I found something curious about the nginx configuration.
Whenever I use a string literal as the certificate/key file location, everything works fine. But since I need slightly more dynamic logic, I tried maps. Apparently, when the file path contains a variable, nginx doesn't use its root privileges, meaning it cannot open the certificate?
example (this works perfectly):
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name _;
ssl_certificate /etc/letsencrypt/live/mydomain.com/cert.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
# rest of the configuration stuff
}
example (this somehow doesn't):
map $host $cert_name {
mydomain.com mydomain.com;
www.mydomain.com mydomain.com;
default mydomain.com;
}
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name _;
ssl_certificate /etc/letsencrypt/live/$cert_name/cert.pem;
ssl_certificate_key /etc/letsencrypt/live/$cert_name/privkey.pem;
# rest of the configuration stuff
}
This yields the following error message:
cannot load certificate "/etc/letsencrypt/live/mydomain.com/cert.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/mydomain.com/cert.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: #client-ip#, server: 0.0.0.0:443
additional info:
OS: Ubuntu 20.04
nginx: nginx/1.18.0 (Ubuntu), built with OpenSSL 1.1.1f (31 Mar 2020), TLS SNI support enabled
nginx started as root user?: yes
letsencrypt folder permissions: default
Is this a security feature, or a bug?
In any case, is there a workaround?
In case anyone else stumbles on this problem: I found the answer in the Let's Encrypt documentation.
Basically, the /etc/letsencrypt/live and /etc/letsencrypt/archive directories are created with permissions 0700 for legacy reasons. When using a recent version of certbot it is safe to set the permissions to 0755. The command they provide is:
chmod 0755 /etc/letsencrypt/{live,archive}
This allows non-root users to access all public certificates, while the private keys remain accessible only to root/super-users. As it turns out, this is exactly what nginx needs: when the certificate path contains a variable, the files are loaded during each SSL handshake by the worker process (which runs as an unprivileged user), rather than once at startup by the root master process, so everything should work after changing the permissions.
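As a quick sanity check, the mode bits before and after the fix can be inspected with stat. This sketch uses a throwaway temporary directory standing in for /etc/letsencrypt/live (the real paths require root):

```shell
# Simulate certbot's legacy 0700 directory and the recommended fix.
# A temp dir stands in for /etc/letsencrypt/live.
live="$(mktemp -d)/live"
mkdir -p "$live"

chmod 0700 "$live"                 # certbot's legacy default: root only
before="$(stat -c '%a' "$live")"   # "700"

chmod 0755 "$live"                 # allow non-root users to traverse
after="$(stat -c '%a' "$live")"    # "755"

echo "$before -> $after"           # prints "700 -> 755"
```

On the real system the same `stat -c '%a'` check against /etc/letsencrypt/live and /etc/letsencrypt/archive confirms the chmod took effect.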
I’m trying to deploy different websites on the same server that all listen on 0.0.0.0:443 with HTTP/2. My Ansible template looks like this:
# This is deployed in /etc/nginx/sites-available/{{ domain }}.conf and symlinked in sites-enabled
server {
listen 443 ssl http2 reuseport;
listen [::]:443 ssl http2 reuseport;
server_name {{ domain }};
...
}
This doesn’t work because according to the nginx doc (emphasis mine):
The listen directive can have several additional parameters specific to socket-related system calls. These parameters can be specified in any listen directive, but only once for a given address:port pair.
If I were deploying these websites by hand I would use listen 443 ssl http2 reuseport; in the first one and then listen 443; in the subsequent ones. But I’m trying to simplify the setup by having a single Ansible template that I can use for any website.
It seems cumbersome to check whether a website with these options is already deployed and omit them if so. Also, if I remove the website that carries these options, it breaks all the others.
Is there any way I could add a single file somewhere under /etc/nginx that says "use these parameters on 0.0.0.0:443" for all listen directives? I could probably add a dummy server{} block that listens for a nonexistent server name, but I wonder whether there's a proper way to do this.
This is the solution I have for now:
# /etc/nginx/sites-enabled/http2 -> /etc/nginx/sites-available/http2
server {
listen 443 ssl http2 reuseport;
listen [::]:443 ssl http2 reuseport;
server_name --;
}
-- is a dummy server name that doesn’t match anything. This block serves only as a common place for the listen parameters.
Now I just deploy websites with listen 443; and no other parameter, and they all share that common config.
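With that dummy block in place, the per-site Ansible template from the question can drop the socket-level parameters entirely. A sketch, where {{ domain }} is the template variable from the question:

```nginx
# /etc/nginx/sites-available/{{ domain }}.conf
server {
    listen 443;        # socket options come from the shared http2 block
    listen [::]:443;
    server_name {{ domain }};
    # ...
}
```

Removing or adding individual sites no longer touches the shared listen parameters, so none of the other sites break.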
I purchased SSL certificate from a certain hoster and I got these 4 files
> SSL Certificate:
> CSR:
> Private Key:
> CA Certificate:
How can I install these files on my VPS with Nginx? My hoster is not being helpful, and I have to figure out how to install this for my client's site. All my googling leads to the normal installation flow, where you generate a CSR on the VPS, submit it to the hoster, receive the certificates, combine them, and configure them in Nginx. But for this case, I'm totally confused.
Any help is appreciated.
Typically, if you receive your server certificate and private key, you would place those in a directory on your web server and then point to those files from your config file. This can be a full path or a path relative to the .conf file in which this server is configured.
Configure HTTPS server
server {
listen 443 ssl;
server_name www.example.com;
ssl_certificate /path/to/your/file.crt;
ssl_certificate_key /path/to/your/file.key;
}
Depending on your certificate provider, they should also have given you a file containing the CA chain that browsers need to validate your certificate. I'm guessing the file you call CA Certificate is that file. You will need to concatenate your server certificate and the CA certificate, then point to the combined file in your nginx configuration.
Example:
cat SSL_Certificate CA_Certificate > website.crt
server {
listen 443 ssl;
server_name www.example.com;
ssl_certificate /path/to/your/website.crt;
ssl_certificate_key /path/to/your/website.key;
}
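Before reloading nginx, it's worth confirming that the private key actually matches the certificate. This sketch generates a throwaway self-signed pair (standing in for the real website.crt/website.key) and compares the RSA moduli:

```shell
# Work in a scratch directory; a throwaway self-signed pair stands in
# for the real website.crt / website.key from the example above.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=www.example.com" \
    -keyout website.key -out website.crt 2>/dev/null

# A certificate and its key must share the same RSA modulus.
cert_mod="$(openssl x509 -noout -modulus -in website.crt | openssl md5)"
key_mod="$(openssl rsa  -noout -modulus -in website.key | openssl md5)"

[ "$cert_mod" = "$key_mod" ] && echo "key matches certificate"
```

With a real purchased certificate, run the same two modulus commands against the combined website.crt and the provided private key; differing digests mean the files don't belong together.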
I have a Debian server with MySQL and Meilisearch, and I want to use Nginx as a reverse proxy, both for future load balancing and for TLS.
I'm following Meilisearch's documentation to set up Nginx with Meilisearch and Let's Encrypt, with success, but it forces Nginx to proxy everything to port 7700. I want to proxy to 3306 for MySQL, 7700 for Meilisearch, and 80 for an error page or fallback web server. But after modifying /etc/nginx/nginx.conf, the website redirects too many times.
This is the configuration I'm trying at /etc/nginx/nginx.conf:
user www-data;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
stream {
upstream mysql {
server 127.0.0.1:3306;
}
upstream meilisearch {
server 127.0.0.1:7700;
}
server {
listen 6666;
proxy_pass mysql;
}
server {
listen 9999;
proxy_pass meilisearch;
}
}
http {
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name example.com;
return 301 https://$server_name$request_uri;
}
server {
server_name example.com;
location / {
proxy_pass http://127.0.0.1:80;
}
listen [::]:443 ssl ipv6only=on;
listen 443 ssl;
# ssl_certificate managed by Certbot
# ssl_certificate_key managed by Certbot
}
}
The only difference is that example.com is replaced by my actual domain, which already points to the server's IP.
As ippi pointed out, it isn't necessary to reverse proxy MySQL in this particular case.
I used proxy_pass http://127.0.0.1:7700 as the Meilisearch docs suggest.
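In nginx.conf terms, the fix amounts to changing only the proxy_pass target in the 443 server block, which removes the 80 → 443 → 80 redirect loop. A sketch, with example.com standing in for the real domain:

```nginx
server {
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:7700;   # Meilisearch, not port 80
    }
    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key managed by Certbot
}
```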
For future DB load balancing, I'd use MySQL clusters and point to them from another Nginx instance that proxies everything (i.e. HTTPS to the web server, DB access to the list of clusters, etc.).
Also, in this particular case I don't actually need encrypted connections to the DB, but if I did, I'd use a self-signed certificate in MySQL and a CA certificate for the website, since my front ends communicate with a "central" Nginx proxy that could encrypt traffic between the backend servers and the proxy.
If I actually wanted to use the Let's Encrypt certificates for MySQL, I've found a good recipe, but instead of copy-pasting the certificates (which is painful) I'd mount the Let's Encrypt certificate directory into MySQL's and change the permissions to 600 with chmod. Then again, this is probably bad practice, and it would be better to use self-signed certificates for MySQL, as its own docs suggest.
I installed nginx using sudo apt-get install nginx.
This lets me go to my_ip:port to visit the website.
Yet I can also go to my_url:port and it will also take me to the website.
How does nginx know my_url when I have not told it my_url anywhere?
I was running Apache before, can that explain it?
Nginx serves the site via the FQDN my_url:port, even though you haven't added my_url to the nginx config, because a block with the default_server parameter exists (one usually does by default).
The default_server parameter marks the block that serves a request when the requested server_name does not match any of the available server blocks.
For example:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
try_files $uri $uri/ =404;
}
}
Nginx doesn't need it (at least, not yet). Your web browser looks up my_url in the DNS, and then uses my_ip (from DNS) :port (which you entered in your browser) to connect to Nginx.
Your Nginx is probably only configured with one site, which means any connection to it - regardless of whether it is by IP or by domain name - causes Nginx to serve that site. You can change this by going into your Nginx configuration files and setting (or changing) the value of the server_name parameter, for example:
server { # You already have a server block somewhere in the config file
listen 80; # Or 443, if you've enabled SSL
server_name example.com www.example.com; # Add (or change) this line to the list of addresses you want to answer to
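If you'd rather have requests for unmatched names (or the bare IP) rejected instead of served, a common pattern is a separate catch-all default_server that drops the connection. A sketch:

```nginx
server {
    listen 80 default_server;
    server_name _;        # catch-all for names no other block matches
    return 444;           # nginx-specific: close the connection
}
```

With this in place, only requests whose Host header matches a configured server_name reach your real site.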
# nginx -v
nginx version: nginx/1.2.1
I've tried everything I can find. I cannot get http://www.mysite.com to redirect to https://mysite.com.
What I have right now will redirect http://mysite.com to https://mysite.com.
http://www.mysite.com does not work at all. It returns an "Oops! Google Chrome could not find www.mysite.com" error. Here is my current, half-working configuration:
vim /etc/nginx/sites-available/default
server {
listen 80;
server_name www.mysite.com;
return 301 $scheme://mysite.com$request_uri;
}
server {
listen 443;
allow all;
root /home/jacob/mysite;
server_name mysite.com;
ssl on;
ssl_certificate /etc/nginx/ssl/mysite_com.pem;
ssl_certificate_key /etc/nginx/ssl/server.key;
...
No matter what I try, the non-www always works and the www never works. I'm not sure whether I need to reset something else. Every time I change the config file I restart the nginx server.
You need to create a DNS record so that the www. subdomain points at your server. Chrome's "could not find" error means the name doesn't resolve, so the request never reaches nginx, and no nginx configuration change can fix it.
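Once the www record exists (for example, `dig +short www.mysite.com` returns the server's IP), note that the config above only redirects plain HTTP. To also redirect https://www.mysite.com, a 443 block for the www name is needed. A sketch in the same nginx 1.2-era style, assuming the certificate also covers www.mysite.com:

```nginx
server {
    listen 443;
    server_name www.mysite.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/mysite_com.pem;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    return 301 https://mysite.com$request_uri;
}
```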