certbot is not displaying all the certificates - nginx

I am using certbot with the nginx plugin. I started with main.domain.com and later added subdomain.domain.com. Whenever I run certbot certificates, the latter is not displayed, and there is no folder for the subdomain under /etc/letsencrypt/live either. Yet the HTTPS configuration for it works like a charm.
The issue is that I am now getting a renewal-due warning from certbot, and of course there is no entry under /etc/letsencrypt/renewal either.
What do I do now?
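A hedged way to check what certbot actually tracks, and to re-issue a single certificate covering both names (--expand applies when the new name set is a superset of the existing certificate's names):
sudo certbot certificates            # list every certificate certbot manages
ls /etc/letsencrypt/renewal/         # one renewal .conf per managed certificate
# If the subdomain is missing from both, re-issue one certificate covering both names:
sudo certbot --nginx --expand -d main.domain.com -d subdomain.domain.com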

Related

Browser shows Let's Encrypt certificate expired when it isn't

Can someone please assist me?
Accessing the domain sg.simpple.app results in an error indicating that the certificate date is invalid.
However, running certbot certificates shows that the certificate is up to date and has ample time until expiry.
I have also restarted the relevant services with:
systemctl restart nginx
systemctl restart php-fpm
My suspicion is that nginx is serving the wrong certificate; can someone please guide me in solving this issue?
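One way to confirm which certificate nginx is actually serving, and to compare it with what certbot certificates reports:
echo | openssl s_client -connect sg.simpple.app:443 -servername sg.simpple.app 2>/dev/null | openssl x509 -noout -subject -dates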
The issue was with the certificate file paths in /etc/nginx/conf.d/default.conf.
Because the previous Let's Encrypt certificate covered different domains than the newly generated one, certbot did not replace the original certificate files.
I had to manually change the file paths in /etc/nginx/conf.d/default.conf.
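For reference, a minimal sketch of what the corrected ssl_certificate lines in /etc/nginx/conf.d/default.conf might look like; the live/ directory name is an assumption (certbot sometimes appends -0001 to a new lineage), so check sudo certbot certificates for the real path:
server {
    listen 443 ssl;
    server_name sg.simpple.app;
    # Point at the lineage certbot actually renews; verify this directory name.
    ssl_certificate     /etc/letsencrypt/live/sg.simpple.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sg.simpple.app/privkey.pem;
}
Then run sudo nginx -t and reload nginx so the new paths take effect.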

The nginx plugin is not working; there may be problems with your existing configuration

I'm trying to install certbot. I'm using CentOS 8 and following the instructions from https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-centos-8
The error occurs when I run sudo certbot --nginx. The error I get is:
The nginx plugin is not working; there may be problems with your existing configuration.
The error was: NoInstallationError("Could not find a usable 'nginx' binary.
Ensure nginx exists, the binary is executable, and your PATH is set correctly.",)
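A hedged checklist for this error: make sure an nginx binary actually exists and is executable by root, or point certbot at it explicitly (the /usr/sbin/nginx path below is the usual CentOS location, but still an assumption):
which nginx || sudo dnf install nginx      # CentOS 8 installs nginx via dnf
sudo nginx -t                              # confirm the binary runs
# If nginx is installed but not on root's PATH, tell certbot where the binary lives:
sudo certbot --nginx --nginx-ctl /usr/sbin/nginx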

SSL: Certbot + AWS Lightsail + LetsEncrypt + Really Simple SSL Plugin

Scenario:
The current server (example.com) is running an older Amazon AWS Lightsail WordPress image (Ubuntu), and we just had a new certificate issued using Let's Encrypt. All is well. The original certificate was requested as a wildcard, so it is functional for any subdomain.
Now, we needed to spin up a fresh new server for a subdomain, let's call it development.example.com.
The new AWS Lightsail instances are no longer Ubuntu but Debian!
The idea was to install certbot on the new Debian instance and then copy over the certificate files from the primary server (example.com).
I've done this successfully in the past when going from Ubuntu to Ubuntu, but now that the new instance is Debian, the Really Simple SSL plugin does not recognize that a certificate is installed.
STEPS I took to move the certificate files:
What I've done before is simply to copy /etc/letsencrypt/* from one server to another and then follow the steps outlined in the AWS documentation here:
https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpress#complete-the-prerequisites-lets-encrypt-wordpress
In this case, that meant performing steps 7.4, 7.5 and 7.6, and section 8.
However, the steps described in section 8.1 no longer appear valid for Debian, because this location does not exist on Debian:
sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/wp-config.php
and because it seems an .htaccess does not exist either:
sudo chmod 666 /opt/bitnami/apps/wordpress/conf/htaccess.conf
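A quick, hedged way to locate the equivalent files on the Debian-based Bitnami image, since its directory layout differs from the older Ubuntu stack:
sudo find /opt/bitnami -maxdepth 3 -name wp-config.php 2>/dev/null
sudo find /opt/bitnami -maxdepth 4 -name "*htaccess*" 2>/dev/null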
Are there additional steps I've missed in order to copy the necessary files so that SSL works properly on this new subdomain server running Debian?
I was going to request a new certificate on the development server, but wouldn't that invalidate the certificate currently installed for the primary domain?
In other words, how do I properly copy the SSL files from the main Ubuntu server and configure the Debian subdomain server so that both WordPress installations have SSL correctly installed?
Thank you @mikemoy; indeed, one can issue multiple wildcard certificates for the same domain from different servers. I just went ahead and issued a new certificate.
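For reference, a minimal sketch of issuing a fresh wildcard certificate directly on the new Debian instance; wildcards require a DNS-01 challenge, and a manual challenge like this will not auto-renew without a DNS plugin or authentication hook:
sudo certbot certonly --manual --preferred-challenges dns -d "example.com" -d "*.example.com"
# certbot prints TXT records to create under _acme-challenge.example.com;
# add them at your DNS provider and wait for propagation before continuing.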

Redirect default (80) port to 5000 - Flask + NGINX + Ubuntu

I'm successfully able to run a Flask app at IP:5000: a simple Hello World program that shows its output in my browser.
Now, what I would like to do is configure NGINX as a reverse proxy so that when I access just the IP (which defaults to port 80), the request is forwarded to port 5000 and shows the output of my application.
In other words...
This is working : IP:5000 -> Output = Hello world
This isn't working: IP -> This site can’t be reached
The server settings that I want to add would be something like this.
server {
    listen 80;
    server_name MY_IP;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
However, I'm not sure where to add this. Should it go inside the http block in /etc/nginx/nginx.conf?
Updates: Based on the answers given below, I've managed to do the following.
I did restart nginx after this. However, I'm still facing the same issue: the app works on IP:5000 but does not work on IP alone.
The configuration you have mentioned should go in a separate file, say example.com.conf, under /etc/nginx/conf.d. You could put all the configuration in /etc/nginx/nginx.conf and it would work; it's just that, for readability, we create separate configuration files, which are included automatically when placed inside conf.d.
Ok, the problem is fixed. As @senaps and @Mukanahallipatna mentioned, I created the new configuration file under conf.d.
However, the most important step I was missing was the part mentioned in the link below.
It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet, in this guide, we will only need to allow traffic on port 80.
Reference Link
sudo ufw allow 'Nginx HTTP'
Now, everything is working fine.
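To confirm the firewall rule took effect, a quick check (assuming ufw is the active firewall, as in the referenced guide):
sudo ufw status        # should now list "Nginx HTTP" (port 80) as ALLOW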
Put the working blocks in a file named any_name.conf inside /etc/nginx/conf.d and it will be loaded automatically.
You will need to restart nginx.
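A minimal sketch of that workflow; the filename flask_app.conf is just a placeholder:
sudo nano /etc/nginx/conf.d/flask_app.conf   # paste the server block from the question
sudo nginx -t                                # validate the syntax before restarting
sudo systemctl restart nginx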
update:
What are you using to serve Flask? If you are using uWSGI, then you should use a configuration like this:
include uwsgi_params;
uwsgi_pass unix:path_to_your.sock;
Other options for uwsgi_pass are:
uwsgi_pass localhost:9000; #normal
uwsgi_pass uwsgi://localhost:9000;
uwsgi_pass suwsgi://[2001:db8::1]:9090; #uwsgi over ssl
If you are using gunicorn to serve your Flask app, then your current config should be fine. Check whether the app is actually running and whether you can reach your index page on port 5000, then look for other problems. Your config looks good; maybe the problem is that the Flask app isn't running?
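For reference, a minimal sketch of running the app under gunicorn so it matches the proxy_pass target above; the module and application names (app:app) are assumptions:
pip install gunicorn
gunicorn --bind 127.0.0.1:5000 app:app   # assumes the Flask object is named `app` in app.py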

Nginx server_name regex match not setting passenger_app_env

Nginx: Built with passenger-install-nginx-module
Passenger Version: 5.0.28
OS: Ubuntu 14.04
I have symlinked each of my apps into their own set of environment folders:
/Repository
    /development.manager
        /app
        ...
    /test.manager
    /staging.manager
    ...
The actual folder is at another location on my HDD; all of these folders are symlinks pointing to that one folder.
The problem is that Nginx doesn't seem to be setting the Passenger environment variable properly. Checking the logs, it throws an app error that doesn't make sense (and the nginx config is the only thing that has changed since things broke). Also, the error page shown states:
Because you are running this web application in staging or production
mode, the details of the error have been omitted from this web page
for security reasons.
Which means that it's not using the development environment even though the root directory in the logs shows development.manager. This is when I access through the url: http://manager-development/.
Here's the relevant excerpt from my nginx sites-enabled configuration:
server {
    listen 80;
    server_name ~^manager-(?<environment>development|test)$;

    passenger_app_env $environment;
    passenger_ruby /home/vagrant/.rvm/gems/ruby-2.3.1@manager/wrappers/ruby;
    passenger_enabled on;

    root /home/vagrant/apps/$environment.manager/public;
    client_max_body_size 30M;
}
I have a feeling the solution might be a combination of an answer I provided here as well as a possibly misconfigured nginx block.
EDIT: I explicitly raised an error in my Rails app that printed the environment as a string, and it's literally "$environment"...
I've given up on this approach as it seems variables aren't interpreted by nginx when used in certain places. I'm now using a custom Bash/Ruby script to iterate over my environments/app names and generate the configuration blocks.
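For reference, a minimal sketch of that kind of generator script, assuming the environments and paths from the config above (the sites-enabled location and the reload command are assumptions for this setup):
for env in development test staging; do
  sudo tee /etc/nginx/sites-enabled/manager-$env.conf > /dev/null <<EOF
server {
    listen 80;
    server_name manager-$env;

    passenger_app_env $env;
    passenger_ruby /home/vagrant/.rvm/gems/ruby-2.3.1@manager/wrappers/ruby;
    passenger_enabled on;

    root /home/vagrant/apps/$env.manager/public;
    client_max_body_size 30M;
}
EOF
done
sudo nginx -t && sudo service nginx reload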
