This might be a simple error, but I can't seem to get certbot to verify my domain. I am using nginx as a reverse proxy in front of an Express application. I have commented out the configuration in the default nginx file, so nginx only loads the configuration for my site from /etc/nginx/conf.d/mysite.info. In my configuration, the first location entry maps /.well-known/acme-challenge to my web root. Here are the settings from my nginx conf file:
server {
    listen 80;
    server_name <MYDOMAIN>.info www.<MYDOMAIN>.info;

    location '/.well-known/acme-challenge' {
        root /srv/www/<MY_ROOT_DIRECTORY>;
    }

    location / {
        proxy_pass http://localhost:4200;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /secure {
        auth_pam "Secure zone";
        auth_pam_service_name "nginx";
    }
}
To verify, I used the following certbot command:
certbot certonly --agree-tos --email <MY_EMAIL>@gmail.com --webroot -w /srv/www/<ROOT_FOLDER>/ -d <MYDOMAIN>.info
The errors from certbot are as follows:
Performing the following challenges:
http-01 challenge for <MYDOMAIN>.info
Using the webroot path /srv/www/<ROOT_FOLDER> for all unmatched domains.
Waiting for verification...
Challenge failed for domain <MYDOMAIN>.info
http-01 challenge for <MYDOMAIN>.info
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: <MYDOMAIN>.info
Type: unauthorized
Detail: Invalid response from
http://<MYDOMAIN>.info/.well-known/acme-challenge/Yb3c1WtCn5G43YatrhVorTbT_nn3WKTLwKjr0c9dW8E
[74.208.<...>.<...>]: "<!DOCTYPE html>\n<html
lang=\"en\">\n<head>\n<meta
charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>Cannot
GET /.well-known/"
I am literally clueless at this point. All the directories and files have read permission for all users and groups. Any suggestions will be highly appreciated.
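One way to narrow this down (a sketch, assuming the webroot really is /srv/www/<MY_ROOT_DIRECTORY>): drop a test file where the challenge files should live and fetch it over plain HTTP. If Express answers with "Cannot GET ..." instead of the file contents, the acme-challenge location is not matching:
mkdir -p /srv/www/<MY_ROOT_DIRECTORY>/.well-known/acme-challenge
echo test > /srv/www/<MY_ROOT_DIRECTORY>/.well-known/acme-challenge/test.txt
curl http://<MYDOMAIN>.info/.well-known/acme-challenge/test.txt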
EDIT
Since Nginx was failing to deliver the challenge files, I modified my Express server to send them instead. The Express app is publicly reachable, so it was easy to serve the challenge files from there and get certbot to work. Although not the desired solution, it worked. However, I will keep the post open for a better answer.
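For reference, a commonly used nginx-side variant (a sketch, assuming the challenge files live under /srv/www/<MY_ROOT_DIRECTORY>/.well-known/acme-challenge/) pins the prefix match with ^~ so no other location can take precedence over it:
location ^~ /.well-known/acme-challenge/ {
    root /srv/www/<MY_ROOT_DIRECTORY>;
    default_type "text/plain";
}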
About this error:
Challenge failed for domain
This error can happen if you do not have port 443 open in your firewall.
I had the same problem trying to make certbot work on AWS. After a few tries, I just needed to open port 443 in the Security Group associated with the EC2 instance.
I was facing this issue too, but my problem was a little different. After doing some research, I learned that the domain I was running certbot against is behind Cloudflare, and a WAF rule for country restriction was blocking all traffic before it reached the origin server, so turning off the country restriction for a while did the job.
Related
I'm experiencing a strange issue when requesting Let's Encrypt certificates for a Next.js app hosted behind an Nginx reverse proxy.
I personally have a dedicated Nginx reverse proxy config template for configuring subdomains, which I used to configure web servers for multiple apps and subdomains:
server {
    server_name subdomain.example.com;

    location / {
        proxy_pass http://localhost:8888;
        include /etc/nginx/proxy_params;
    }

    access_log /var/log/nginx/example-subdomain.log;
    error_log /var/log/nginx/example-subdomain-error.log crit;

    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    listen 80;
}
However, when running sudo certbot --nginx -d subdomain.example.com, certbot fails with the following log. Note that this is the very first time I've encountered such an error; both the config and the command worked successfully in my previous attempts.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for subdomain.example.com
Certbot failed to authenticate some domains (authenticator: nginx). The Certificate Authority reported these problems:
Domain: subdomain.example.com
Type: unauthorized
Detail: 2400:6180:0:d0::15f6:1001: Invalid response from http://subdomain.example.com/.well-known/acme-challenge/hYZEXfMlhq-UDKblyOM2kXk_y-bbNJ5NOzTQly1AXeo: 404
Hint: The Certificate Authority failed to verify the temporary nginx configuration changes made by Certbot. Ensure the listed domains point to this nginx server and that it is accessible from the internet.
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
After further investigation, the requested http://subdomain.example.com/.well-known/acme-challenge/hYZEXfMlhq-UDKblyOM2kXk_y-bbNJ5NOzTQly1AXeo is indeed not found: Nginx simply passes the request through to the Next.js application I built instead of intercepting it and returning the proper response to complete my Let's Encrypt request.
Is there something wrong with my Nginx config? Or are there any steps I've missed?
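One workaround to try (a sketch, not a confirmed root-cause fix; the /var/www/certbot path is an assumption): intercept the challenge path ahead of the proxy and use certbot's webroot authenticator instead of the nginx one:
location /.well-known/acme-challenge/ {
    root /var/www/certbot;
}
After adding this block and reloading nginx, sudo certbot certonly --webroot -w /var/www/certbot -d subdomain.example.com can place and serve the challenge files without certbot having to rewrite the proxy config.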
I am currently working on an FPV robotics project that has two servers, flask/werkzeug and streamserver, serving HTTP traffic and streaming video to an external system located on a different machine.
The way it is currently configured is like this:
http://1.2.3.4:5000 is the "web" traffic (command and control) served by flask/werkzeug
http://1.2.3.4:5001 is the streaming video channel served by streamserver.
I want to place them behind a https reverse proxy so that I can connect to this via https://example.com where "example.com" is set to 1.2.3.4 in my external system's hosts file.
I would like to:
Pass traffic to the internal connection at 1.2.3.4:5000 through as a secure connection. (certain services, like the gamepad, won't work unless it's a secure connection.)
Pass traffic to 1.2.3.4:5001 as a plain-text connection on the inside as "streamserver" does not support HTTPS connections.
. . . such that the "external" connections (to ports 5000 and 5001) are both secure as far as the outside world is concerned:
[external system] --https://example.com:5000/5001--> nginx --> https://example.com:5000
                                                          \-> http://example.com:5001
Plain http to example.com:5000 or 5001 redirects to https.
All of the literature I have seen so far talks about:
Routing/load-balancing to different physical servers.
Doing everything within a Kubernates and/or Docker container.
My application is just an everyday, plain-vanilla server configuration, and the only reason I am even messing with https is that things keep refusing to work outside a secure context, which prevents me from completing my project.
I am sure this is possible, but the literature is either hideously confusing or addresses a different use case.
A reference to a simple how-to would be the most useful.
Clear and unambiguous steps would also be appreciated.
Thanks in advance for any help you can provide.
This minimal config should provide public endpoints:
http://example.com/* => https://example.com/*
https://example.com/stream => http://1.2.3.4:5001/
https://example.com/* => https://1.2.3.4:5000/
# redirect to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com
                www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com
                www.example.com;

    ssl_certificate /etc/nginx/ssl/server.cer;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location /stream {
        proxy_pass http://1.2.3.4:5001/; # HTTP
    }

    # fallback location
    location / {
        proxy_pass https://1.2.3.4:5000/; # HTTPS
    }
}
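If you save this under /etc/nginx/sites-available/ with a symlink in sites-enabled/ (assuming the usual Debian-style layout), you can check it with sudo nginx -t and apply it with sudo systemctl reload nginx.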
First, credit where credit is due: @AnthumChris's answer is essentially correct. However, if you've never done this before, the following additional information may be useful:
There is actually too much information online, most of which is contradictory, possibly wrong, and unnecessarily complicated.
It is not necessary to edit the nginx.conf file. In fact, that's probably a bad idea.
The current open-source version of nginx can be used as a reverse proxy, despite comments on the nginx web site suggesting you need the commercial version. As of this writing, the current version for the Raspberry Pi is 1.14.
After sorting through the reams of information, I discovered that setting up a reverse proxy to multiple backend devices/server instances is remarkably simple. Much simpler than the on-line documentation would lead you to believe.
Installing nginx:
When you install nginx for the first time, it will report that the installation has failed. This is a bogus warning: the installation process tries to start the nginx service(s), and because there isn't a valid configuration yet, the startup fails. The installation itself is (likely) correct and proper.
Configuring the systems using nginx and connecting to it:
Note: This is a special case unique to my use-case as this is running on a stand-alone robot for development purposes and my domain is not a "live" domain on a web-facing server. It is a "real" domain with a "real" and trusted certificate to avoid browser warnings while development progresses.
It was necessary for me to make entries in the robot's and the remote system's HOSTS files to automagically redirect references to my domain to the correct device (the robot's fixed IP address) instead of Directnic's servers, where the domain is parked.
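For example, an entry like this in the HOSTS file on both machines (192.168.1.50 is a made-up address; substitute the robot's actual fixed IP):
192.168.1.50    example.com www.example.com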
Configuring nginx:
The correct place to put your configuration file (on the Raspberry Pi) is /etc/nginx/sites-available, with a symlink to that file in /etc/nginx/sites-enabled.
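For example (robot.conf is a hypothetical name; use whatever you called your file):
sudo ln -s /etc/nginx/sites-available/robot.conf /etc/nginx/sites-enabled/robot.conf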
It does not matter what you name it, as nginx.conf blindly imports whatever is in that directory. The other side of that is: if there is anything already in that directory, you should remove it or rename it with a leading dot.
nginx -T is your friend! You can use this to "test" your configuration for problems before you try to start it.
sudo systemctl restart nginx will attempt to restart nginx, (which as you begin configuration, will likely fail.)
sudo systemctl status nginx.service > ./[path]/log.txt 2>&1 is also your friend. This allows you to collect error messages at runtime that will prevent the service from starting. In my case, the majority of the problems were caused by other services using ports I had selected, or silly mis-configurations.
Once you have nginx started, and the status returns no problems, try sudo netstat -tulpn | grep nginx to make sure it's listening on the correct ports.
Troubleshooting nginx after you have it running:
Most browsers (Firefox and Chrome at least) support a "developer mode" that you enter by pressing F12. The console messages can be very helpful.
SSL certificates:
Unlike some other SSL servers, nginx requires the site certificate to be combined with the intermediate certificate bundle received from the certificate authority; create it with cat mycert.crt bundle.file > combined.crt (the order matters: site certificate first, then the intermediates).
Ultimately I ended up with the following configuration file:
Note that I commented out the HTTP redirect, as there was a service using port 80 on my device. Under normal conditions, you will want to automatically redirect port 80 to the secure connection.
Also note that I did not use hard-coded IP addresses in the config file. This allows you to reconfigure the target IP address if necessary.
A corollary to that: if you're proxying to an internal secure device configured with the same certificates, you have to pass it through as the domain instead of the IP address; otherwise the secure connection will fail.
#server {
#    listen example.com:80;
#    server_name example.com;
#    return 301 https://example.com$request_uri;
#}

# This is the "web" server (command and control), running Flask/Werkzeug,
# that must be passed through as a secure connection so that the
# joystick/gamepad works.
#
# Note that the internal Flask server must be configured to use a
# secure connection too. (Actually, that may not be true, but that's
# how I set it up. . .)
#
server {
    listen example.com:443 ssl;
    server_name example.com;

    ssl_certificate /usr/local/share/ca-certificates/extra/combined.crt;
    ssl_certificate_key /usr/local/share/ca-certificates/extra/example.com.key;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://example.com:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# This is the video streaming port/server running streamserver,
# which is not, and cannot be, secured. However, since most
# modern browsers will not mix insecure and secure content on
# the same page, the outward facing connection must be secure.
#
server {
    listen example.com:5001 ssl;
    server_name example.com;

    ssl_certificate /usr/local/share/ca-certificates/extra/combined.crt;
    ssl_certificate_key /usr/local/share/ca-certificates/extra/www.example.com.key;
    ssl_prefer_server_ciphers on;

    # After securing the outward facing connection, pass it through
    # as an insecure connection so streamserver doesn't barf.
    location / {
        proxy_pass http://example.com:5002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Hopefully this will help the next person who encounters this problem.
I have looked across several forums and Stack Overflow questions, but I'm truly at a loss as to why this is not working.
I was running a Ghost.org blog on a DigitalOcean droplet that I had configured according to this tutorial. I took a snapshot and destroyed the droplet a week ago; everything was working fine at that point.
Today, I created a fresh droplet with the same snapshot. Since the IP was different, I modified the domain settings accordingly and it was reflected on my computer as well.
Then I tried to access the website, but the connection timed out. Even after an hour or so it still timed out, so I figured DNS propagation was not at fault. A simple check on whatsmydns.net also confirmed this was not the issue.
Upon further investigation, I found that /var/log/nginx/error.log has the following line in it (the only one for today's date):
2016/05/29 16:32:10 [warn] 988#0: conflicting server name "foobar.com.tld" on 0.0.0.0:80, ignored
I have checked the two configs I know could conflict, nginx.conf and sites-enabled/ghost (a symlink to sites-available/ghost), and I can't seem to find any conflict.
I am not very comfortable with nginx (this is really my first exposure to it), but I've been banging my head for over 2.5 hours now and would really appreciate some help.
nginx.conf: http://pastebin.com/YUhBZVhX
sites-enabled/ghost: http://pastebin.com/3cdZgTG3
www/ghost/config.js: http://pastebin.com/iunffMzN
Edit: /etc/nginx/conf.d/ folder is empty, so there isn't a conflict there.
Edit 2: The simplest config:
server {
    listen 80;
    server_name foobar.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }
}
doesn't seem to work either. I did restart the nginx service after making this change, and I also modified Ghost's config.js accordingly and restarted its service.
Edit 3: There are no hidden or backup files in the sites-enabled folder.
Once you've confirmed that foobar.com isn't used in server_name anywhere else in your Nginx configs, look for $hostname or another Nginx variable whose value might match your server_name:
Alphabetical index of Nginx variables
Example:
server {
    server_name localhost $hostname;
    listen 80;
    access_log off;
}
Got me puzzled there for a while.
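A quick way to hunt for the duplicate (assuming nginx is on your PATH) is to dump the fully expanded configuration, includes and all, and grep it:
sudo nginx -T | grep -n server_name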
I have three sites configured on my server using NGINX and the first two are working fine. One is a static site and one is running Rails (using Unicorn). I have attempted to mirror the NGINX/Unicorn configurations.
For the non-working site, I get "problem loading site" in my browser and absolutely nothing in my NGINX error logs (even at debug level) or my Unicorn log. I also get nothing when I attempt to cURL the site.
I have double-checked DNS by pinging the domain name and am running out of ideas. I've also tried making this the default server and browsing by IP address.
Thoughts on how I should go about debugging? I would like to at least understand if NGINX is seeing these requests or not.
NGINX configuration:
upstream unicorn-signup {
    server unix:/home/signup/app/tmp/sockets/unicorn.sock;
}

server {
    listen 80;
    listen [::]:80;

    root /home/signup/app/current/public;
    server_name signup.quote2bill.com;

    # configure for Unicorn (NGINX acts as reverse proxy)
    location / {
        try_files $uri @unicorn;
    }

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_pass http://unicorn-signup;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
Fixed! It was the dreaded force_ssl flag in my production configuration. For future travelers, here is how I went about troubleshooting:
Went on a Costco run to clear my mind and buy huge quantities of stuff.
To determine whether it was a DNS, NGINX, or Unicorn/Rails problem, I replaced my NGINX configuration with a very simple one and placed a simple index.html in my public root. This worked fine, which lets DNS off the hook (I could resolve the domain name at the web server).
I diff'd the working and non-working NGINX configuration files for the nth time and made them as close as possible but didn't find anything.
Then I noticed that when I was serving the simple index.html file in #2 above, the domain was not getting redirected to https://, but when I switched to my "normal" Unicorn/Rails version, I was always getting redirected.
I searched for Rails redirecting to SSL and remembered the force_ssl flag.
I checked my two projects and noticed the flag was not set in the working project, but set in the non-working one (smoking gun).
I changed, committed, redeployed, and reloaded the browser and it... didn't work (!) Fortunately, I had the good sense to clear the browser cache and try again, and it is all good now.
Hope this helps someone.
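For reference, the flag in question is config.force_ssl = true in config/environments/production.rb. When enabled, Rails answers with a permanent redirect to https, which is also why a stale redirect can linger in the browser cache after the flag is turned off.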
I am trying to build a docker-registry server from source (not as a container) on Ubuntu 14.04.1. I was able to get most of the way there using the instructions found on DigitalOcean.
I am able to curl http://localhost:5000 and https://user:password@localhost:8000 with no problems.
The issues seem to happen when I open a web browser hoping to see more than just that.
Here is my docker-registry file in /etc/nginx/sites-available/:
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary
upstream docker-registry {
    server 192.168.x.x:5000;
}

server {
    listen 8000;
    server_name docker-registry;

    ssl on;
    ssl_certificate /etc/nginx/ssl/docker-registry.crt;
    ssl_certificate_key /etc/nginx/ssl/docker-registry.key;

    proxy_set_header Host $http_host;        # required for Docker client sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

    client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        # let Nginx know about our auth file
        auth_basic "Restricted";
        auth_basic_user_file docker-registry.htpasswd;
        proxy_pass http://docker-registry;
    }

    location /_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }
}
I have my docker registry stored locally in /var/docker-registry and have ensured that it is readable by the www-data user. Why can I not see my images in the web browser?
If I tag an image and push it to my repository, it works, and I can see it in the web browser:
https://192.168.x.x:8000/v1/repositories/ubuntu-test/tags/latest
I see the following:
"5ba9dab47459d81c0037ca3836a368a4f8ce5050505ce89720e1fb8839ea048a"
When I try to get to:
https://192.168.x.x:8000/v1
Or:
https://192.168.x.x:8000/v1/repositories
Or:
https://192.168.x.x:8000/v1/images
I get a "not found" error
How would I be able to see everything in my /var/docker-registry folder (which is where these are stored....and yes, they are owned by the www-data user) through the web interface?
This is by design. Not only is there no reason to expose the entire URL path as a browsable directory tree, but doing so would have severe security implications.
I'm assuming you don't have much experience with web programming. There is no directory '/v1/repositories', etc. Instead, there is a program (in this case either Python or Ruby) listening for the URL path, with logic built in to determine what to do. In Python-ish terms:
if url == '/v1/_ping':
    return 'ok'