Certbot having problems finding my ACME challenge on nodejs web application - nginx

I have a NodeJS web service exposed through an Nginx reverse proxy. I am trying to renew an SSL certificate with certbot, and for renewal it looks at domain.com/.well-known for the ACME challenge. However, the way I have the Node service configured, the root path does not serve files; requests to the root of the domain are caught and handled by my web service. My actual public webroot is at domain.com/public, so the ACME challenge really lives at domain.com/public/.well-known.
So there are two ways to fix this: either figure out how to tell certbot to look at domain.com/public/.well-known instead of domain.com/.well-known, or figure out how to somehow "proxy" domain.com/public/.well-known to domain.com/.well-known.
Here is my config and failed attempt at redirecting it:
server {
    listen 80;
    listen 443 ssl;
    client_max_body_size 50M;
    ssl_certificate <path to cert>;
    ssl_certificate_key <path to key>;
    server_name domain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /.well-known {
        return 302 "http://{$host}/public{$request_uri}";
    }
}

If you cannot use path based (HTTP) domain validation, you can use DNS based domain validation.
certbot certonly --manual --preferred-challenges dns -d mydomain.com
This will prompt you to add a TXT record to your domain's DNS server. Add the record and then wait a few minutes before pressing ENTER to continue.
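To confirm the TXT record is visible before continuing, you can query it from a shell (the record name follows the usual _acme-challenge convention):
dig +short TXT _acme-challenge.mydomain.com
# Should print the value certbot asked you to publish.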
Then copy the new certificates to your desired location.
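If HTTP validation is still preferred, an alternative is certbot's webroot plugin combined with a dedicated location block, so the challenge is served straight from disk and never reaches the Node service. A minimal sketch, assuming the challenge files live under /var/www/letsencrypt:
# Serve ACME challenges directly from disk, bypassing the Node proxy.
location /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
}
Then request the certificate with certbot certonly --webroot -w /var/www/letsencrypt -d domain.com, pointing at the same directory.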
Certbot User Guide

Related

How to proxy pass from url path to different subdomain on different dns server?

Let's say I have my main domain on one server and one of the subdomains on another server.
Both of these addresses use Cloudflare DNS and point to different IP addresses, so:
example.com => ip1
new.example.com => ip2
Now I want to proxy_pass a certain path on example.com to new.example.com without changing the URL, so:
example.com/something should show the content of new.example.com/somethingElse
These are my nginx config files. The problem is that if I point example.com/something to google.com, or even an ngrok server that I hosted for testing, everything works just fine, but when I point it to new.example.com/something it gives me a 502 error, so my guess is there's something wrong with my new.example.com config.
example.com Config:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/key.pem;
    server_name example.com www.example.com;
    resolver 8.8.8.8;

    location = /something {
        proxy_set_header X-Forwarded-Host new.example.com;
        proxy_set_header Host new.example.com;
        proxy_pass https://new.example.com/somethingElse;
    }
}
new.example.com Config:
server {
    listen 443;
    server_name www.new.example.com new.example.com;
    ssl_certificate /etc/ssl/private/cert.pem;
    ssl_certificate_key /etc/ssl/private/key.pem;

    location / {
        proxy_pass http://container-name:80;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Please test the connectivity between the servers: log in to the example.com server and send a curl request to the new.example.com service.
It looks like the example.com server is not able to reach the new.example.com server.
Please also check the nginx service logs.
Another option to achieve your requirements is the Cloudflare Workers service.
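For example, a verbose request from the example.com host shows whether the TLS handshake and the upstream answer succeed; the second command uses the placeholder ip2 from the question to hit the origin directly, bypassing Cloudflare:
curl -v https://new.example.com/somethingElse
curl -vk --resolve new.example.com:443:ip2 https://new.example.com/somethingElse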

How to block nginx web site access if browsers have no SSL certificate

I am a newbie and I installed JupyterHub with an nginx reverse proxy on my Ubuntu 18.04 server. I built my own root CA and self-signed certificate with OpenSSL. HTTPS connections work very well when my root CA is installed on my other computers. I want to block access for the computers that don't have my root CA.
The file /etc/nginx/nginx.conf is untouched, and my config file /etc/nginx/sites-available/jupyter.conf is:
# Top-level http config for websocket headers:
# if Upgrade is defined, Connection = upgrade; if Upgrade is empty, Connection = close.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name 192.168.4.70 mlserver.net localhost;
    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

# HTTPS server to handle JupyterHub
server {
    listen 443;
    ssl on;
    server_name 192.168.4.70 mlserver.net localhost;
    ssl_certificate /etc/ssl/certs/mlserver.net.crt;
    ssl_certificate_key /etc/ssl/private/mlserver.net.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    #ssl_stapling on;

    # Managing literal requests to the JupyterHub front end
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;
        proxy_buffering off;
    }
}
How can I edit this file to block access for computers that don't have the certificate?
Which nginx directive should I add?
Thanks.
I want to block access for the computers who don't have my rootCA.
This is not possible. The server has no information about whether the client has successfully validated the server certificate (i.e. clients which have the root CA) or whether the client simply skipped certificate validation (clients which don't have the root CA).
One could try to add an HSTS header so that browsers will not simply allow the user to ignore certificate problems. But this can also be bypassed on the client side without the server noticing; it just makes it a bit harder.
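For reference, such a header is set in the HTTPS server block like this (the max-age value is just an example):
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;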
If you want to control who can access the notebook you would need proper authentication of the clients instead. Knowledge of the rootCA is not client authentication.
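One way to get real client authentication at the nginx layer is TLS client certificates. A minimal sketch, to be added inside the existing HTTPS server block; /etc/ssl/certs/clientCA.crt is an assumed path to a CA from which you issue client certificates:
# Require a certificate signed by your own CA before anything is proxied to JupyterHub.
ssl_client_certificate /etc/ssl/certs/clientCA.crt;
ssl_verify_client on;
This means issuing and installing a per-client certificate on each allowed computer, which is different from merely trusting the root CA.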

How to redirect my domain to localhost:3000 using nginx

I'm new to all of this.
I'm going to put you in context: I bought a domain, miweb.pe, and an instance on AWS. Currently my domain resolves to my AWS instance because I registered the DNS servers of my Amazon instance for myweb.pe.
I bought an SSL certificate and am trying to install it on my Amazon instance, where I have also installed nginx. I am unable to make requests to myweb.pe reach the AWS instance, which currently has a Node.js service listening on port 3000.
This is my current configuration. What am I doing wrong?
server {
    listen 443;
    server_name myweb.pe;
    ssl on;
    ssl_certificate /etc/ssl/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/beekey.key;
    access_log /var/log/nginx/nginx.vhost.access.log;
    error_log /var/log/nginx/nginx.vhost.error.log;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 80;
    server_name www.myweb.pe;
    return 301 https://myweb.pe$request_uri;
}

# Redirect https://www.tudominio.com to https://tudominio.com
server {
    listen 443;
    server_name www.miweb.pe;
    return 301 $scheme://myweb.pe$request_uri;
}
In summary, I want that when accessing myweb.pe, it actually reaches localhost:3000, which is running on my Amazon instance.
So, what is the issue you are facing? I can see one issue in your nginx rule: for server_name you need to use the domain name and not localhost. The other thing is, I am assuming your service on port 3000 is already running.
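As a quick sanity check, you can verify from the instance itself that the Node.js service answers locally and that nginx accepts the configuration:
# Should print the app's response headers if the service is listening on port 3000.
curl -I http://127.0.0.1:3000
# Validates the nginx configuration files before reloading.
sudo nginx -t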

Configure NextCloud & Nginx Reverse Port Forward - Login Authentication Error

I have set up an nginx reverse proxy server on my web server, which is receiving SSL traffic, and reverse proxying it to port 8080 on my web server, which is an exposed port running the nextcloud docker image. I am able to log in from a desktop web browser, but I am not able to log in from my iPhone. When I log in from the app, I receive error message "Access Forbidden, Invalid Request." This Github issue identifies the issue as auth headers being removed from the request, though the solution it gives is for Apache, not for Nginx. I'm really not familiar with authorization headers. How would I modify my Nginx server directive to take care of the issue?
Current setup
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name cloud.foo.com;
    ssl_certificate /etc/letsencrypt/live/cloud.foo.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.foo.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
You may need to add a setting to explicitly pass the Authorization header in the response from the proxied server.
For example:
location / {
    proxy_pass http://127.0.0.1:8080/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass_header Authorization;
}
Based on the reverse proxy settings I've seen for another authenticated service, it's probable that by default, Nginx does not pass the Authorization header from the response of a proxied server to a client. Although this is not listed in the documentation, it is probably necessary to avoid interference with the authentication modules.
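One way to see whether credentials survive the proxy is a verbose authenticated request; this assumes the standard NextCloud WebDAV endpoint and a valid user or app password:
curl -v -u username:app-password -X PROPFIND https://cloud.foo.com/remote.php/dav/files/username/
If known-good credentials come back 401 through the proxy but work when sent directly to port 8080 on the host, the proxy's header handling is the likely culprit.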

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken-and-egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we are the owner of the domain name, and this is done by validating that a file is accessible from the server name (basically this consists of Nginx being able to serve a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. But the SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host; # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder, django.conf:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server_name values, so nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there was a way to make nginx start while ignoring a failing configuration file, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if the Pod is not started yet, as long as the Service exists, nginx will find it when looking it up, because the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pods you want, in whatever order you want.
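As a side note on the wish in the question to have nginx start even when a backend is missing: proxying to a variable makes nginx resolve the name at request time instead of at startup, so an absent backend yields a 502 for that path rather than preventing nginx from starting. A sketch only; the resolver address must be your cluster's DNS Service IP, and 10.96.0.10 below is an assumption:
resolver 10.96.0.10 valid=10s;
location / {
    # With a variable, the upstream name is looked up per request via the resolver above.
    set $django_backend http://django:5000;
    proxy_pass $django_backend;
}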
