Could not choose appropriate plugin: The nginx plugin is not working - nginx

Processing /etc/letsencrypt/renewal/api.shunhinggaoke.com.conf
Cert not due for renewal, but simulating renewal for dry run
Could not choose appropriate plugin: The nginx plugin is not working; there may be problems with your existing configuration.
The error was: NoInstallationError()
Attempting to renew cert (api.shunhinggaoke.com) from /etc/letsencrypt/renewal/api.shunhinggaoke.com.conf produced an unexpected error: The nginx plugin is not working; there may be problems with your existing configuration.
The error was: NoInstallationError(). Skipping.
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/api.shunhinggaoke.com/fullchain.pem (failure)
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/api.shunhinggaoke.com/fullchain.pem (failure)
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates above have not been saved.)
1 renew failure(s), 0 parse failure(s)

I just came across your question and don't know if you still have the issue. You need to post more details; you don't even indicate on which platform you have the problem. I assume it's a Linux machine. If so, are you running the certbot renew --dry-run command from the command line or from a cron script?
The output above indicates an environment problem, most likely a PATH issue, and I assume you only get it when running the script via cron. I also assume that you added it to your crontab using the crontab -e or sudo crontab -e command.
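If you want to confirm what environment cron actually runs with, a quick throwaway test entry like this dumps it to a file (the /tmp/cronenv path is just an example; remove the line again afterwards):
* * * * * env > /tmp/cronenv
A minute later, cat /tmp/cronenv and compare its PATH with the one in your login shell.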
The commonly proposed solution is to set your PATH in the crontab file itself and try again. That means: don't use (sudo) crontab -e, but make sure that either your system crontab in /etc/crontab has it set, or that you set it in the crontab file for certbot.
For me on Ubuntu 16.04 it's /etc/cron.d/certbot and it looks like this:
#lots of commented lines preceding ...
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e'sleep int(rand(43200))' && certbot -q renew
It runs twice a day. You can test it by adding another line like this:
* * * * * root /usr/bin/certbot renew --quiet --dry-run
and then watch your certbot log. It's probably in /var/log/letsencrypt/letsencrypt.log and read-protected. Do a
sudo tail -f /var/log/letsencrypt/letsencrypt.log
and see what you get. You should see no renewal failures at the end of the run if everything works. The test entry runs every minute, so remove it once you've captured the logs.

Check that the certbot nginx plugin is installed first.
apt-get install python3-certbot-nginx solved a similar issue on my Debian system.
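If you want to verify that certbot can now see the plugin, certbot ships a plugins subcommand that lists what is installed (exact output varies by version):
sudo certbot plugins
The nginx plugin should appear in that list; if it doesn't, the NoInstallationError will persist.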

Related

How do I fix 502 Bad Gateway error with GCP and NGINX

I'm trying to follow a tutorial on creating an Apache Airflow pipeline on a GCP VM instance (https://towardsdatascience.com/10-minutes-to-building-a-machine-learning-pipeline-with-apache-airflow-53cd09268977), but after building and running the docker container, I get a "502 Bad Gateway" error from Nginx 1.14 when I try to access the webserver using:
http://<VM external ip>/
I'm quite new to using GCP and can't figure out how to fix this.
Some online research has suggested editing NGINX configuration files to:
keepalive_timeout 650;
keepalive_requests 10000;
But this hasn't changed anything.
The GCP instance is an n1-standard-8 with Ubuntu 18.04, and Cloud, HTTPS, and HTTP access enabled.
The Nginx sites-enabled configuration is:
server {
    listen 80;
    location / {
        proxy_pass http://0.0.0.0:8080/;
    }
}
Root Cause:
The issue you are experiencing has nothing to do with keepalives; it is rather simpler: the docker container exits and isn't running, so when nginx tries to proxy your request into the container, it fails, hence the error. The failure is due to an incompatibility between airflow and current versions of sqlalchemy.
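A quick way to confirm that the container has exited (the image name greenr-airflow comes from the tutorial):
sudo docker ps -a --filter "ancestor=greenr-airflow" --format '{{.ID}} {{.Status}}'
A status like Exited (1) means the container crashed instead of serving on port 8080.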
Verification:
Run this command to see the logs of the failed container:
sudo docker logs `sudo docker ps -a -f "ancestor=greenr-airflow" --format '{{.ID}}'`
and you will see that Python inside the container fails to import a package with the following error:
No module named 'sqlalchemy.ext.declarative.clsregistry'
Solution:
While I followed the tutorial to the letter, I'd recommend against running commands with sudo; you may want to deviate from the tutorial a wee bit in order not to.
Before running the
sudo docker build -t greenr-airflow:latest .
command, edit the Dockerfile and add the following two lines:
&& pip install SQLAlchemy==1.3.23 \
&& pip install Flask-SQLAlchemy==2.4.4 \
somewhere up in the list of packages being installed. I added them after
&& pip install -U pip setuptools wheel \
which is line 54 at the time of writing.
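For orientation, the edited region of the Dockerfile might then look something like this (the surrounding lines are from the tutorial's file; only the two lines in the middle are new):
&& pip install -U pip setuptools wheel \
&& pip install SQLAlchemy==1.3.23 \
&& pip install Flask-SQLAlchemy==2.4.4 \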
If you would like to re-use the same instance, delete and rebuild the image after making changes to the file:
sudo docker rmi greenr-airflow
sudo docker build -t greenr-airflow:latest .

Cronjob Not Running On Root User

I have the following cronjob set for my root user on my VPS. The command itself works just fine if I run it as the root user.
30 2 * * * service nginx stop && /opt/letsencrypt/letsencrypt-auto renew && service nginx start > /dev/null
However it isn't working, as I can see that my SSL certificate isn't renewing.
If I run cat /var/log/cron I can see the following:
Nov 13 02:30:01 server CROND[2307]: (root) CMD (service nginx stop && /opt/letsencrypt/letsencrypt-auto renew && service nginx start > /dev/null)
Which seems to indicate it ran, but clearly it hasn't done what it's supposed to.
Other cronjobs on my normal user seem to work fine; however, I can't use that user for this cron, as I need to temporarily stop Nginx.
Any ideas on how I can further debug this and sort it out?
Edit:
I tried running it with output logged, and the following error was shown in the log: /bin/sh: service: command not found
The problem was that the service command was not in cron's PATH, so I had to use the full path, which is /sbin/service.
That meant my final cron command was
30 2 * * * (/sbin/service nginx stop && /opt/letsencrypt/letsencrypt-auto renew && /sbin/service nginx start) > /dev/null 2>&1
Which is now working.
As a general rule, specifying the full path to any shell command in a crontab is best practice.
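Alternatively, as in the certbot answer above, you can set PATH at the top of the crontab so that short command names resolve; a sketch:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
30 2 * * * (service nginx stop && /opt/letsencrypt/letsencrypt-auto renew && service nginx start) > /dev/null 2>&1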

curl can't connect to only certain HTTPS hosts

I am trying to install Meteor.js on a VM (Ubuntu 12.04) created with Vagrant.
The install should be as simple as:
curl https://install.meteor.com | /bin/sh
However this fails with curl: (7) couldn't connect to host
I have isolated the failure to a request the shell script makes to this URL:
https://warehouse.meteor.com/bootstrap/0.7.0.1/meteor-bootstrap-Linux_i686.tar.gz
When I changed it to use HTTP instead of HTTPS it worked. However, I am running into problems elsewhere, where it needs to pull things from https://warehouse.meteor.com/...
I thought the problem was with https, but if I do:
curl https://google.com
I get the page no problem, so what could be the issue?
Per another Ubuntu/Meteor question, it appears that there's some kind of certificate error (Meteor's SSL CA may not be installed by default in Ubuntu?) that goes away when you run:
sudo apt-get update && sudo apt-get upgrade
For me, upgrading didn't solve the problem.
My solution was to download the script from install.meteor.com, change TARBALL_URL from https to http, and run the script manually.
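If you hit something similar, curl's verbose mode usually shows whether certificate verification is failing during the TLS handshake, and refreshing the CA bundle is a common fix. A diagnostic sketch (the ca-certificates step is an assumption, not part of the original answer):
curl -v https://warehouse.meteor.com/ 2>&1 | grep -i -e ssl -e certificate
sudo apt-get install --reinstall ca-certificates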

Cannot get cron to work on Amazon EC2?

I've spent two days trying to understand why I cannot get cron to work on my Ubuntu EC2 instance. I've read the documentation. Can anyone help? All I want is a working cronjob.
I am using a simple wget command to test cron. I have verified that this works manually from the command line:
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
My crontab file looks like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
I have single spaces between the fields and a blank line below the command. I've also tried adding this command at the system level with sudo crontab -e. It still doesn't work.
The cron daemon is running:
ps aux | grep crond
ubuntu 2526 0.0 0.1 8096 928 pts/4 S+ 10:37 0:00 grep crond
The cronjob appears to be scheduled:
$ crontab -l
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
Does anyone have any advice or possible solutions?
Thanks for your time.
Cron can be run on an Amazon Linux server just like on any other Linux server.
Log in to the console with SSH.
Run crontab -e on the command line.
You are now inside a vi editor of the current user's crontab (by default the console user, which has root permissions).
To test cron, add the following line: * * * * * /usr/bin/uptime > /tmp/uptime
Now save the file and exit vi (press Esc and enter :wq).
After a minute or two, check that the uptime file was created in /tmp (cat /tmp/uptime).
Compare it with the current system uptime by typing the uptime command on the command line.
The scenario above worked successfully on a server with the Amazon Linux O/S installed, but it should work on other Linux boxes as well. This modifies the crontab of the current user, without touching the system's crontabs, and doesn't require a user field inside the crontab entry, since you are running things under your own user. Easier, and safer!
Your cron daemon is not running: when you run ps aux | grep crond, the output shows only the grep command itself. Be aware of this whenever you run ps aux | grep blah.
Check the status of the cron service by running:
sudo service crond status
Additional information here: http://www.cyberciti.biz/faq/howto-linux-unix-start-restart-cron/.
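Note that on Ubuntu the daemon and service are named cron, not crond, so a check like the following is more reliable there (the [c]ron bracket trick, an addition of mine, stops grep from matching its own process):
ps aux | grep '[c]ron'
sudo service cron status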
On some AWS Ubuntu EC2 machines, cron jobs cannot be edited or made to run using crontab -e or even sudo crontab -e (for whatever reason). I was able to get cron jobs working by:
Running touch /home/ubuntu/crontest.log to create a log file
Running sudo vim /etc/crontab to edit the system-wide crontab
Adding my own cron job on the second-to-last line, run as the root user, such as * * * * * root date && echo 'It works!' >> /home/ubuntu/crontest.log 2>&1, which dumps stdout and stderr into the logfile created in step 1
Verifying it works by waiting a minute and then running cat /home/ubuntu/crontest.log to see the output of the cron job
Don't forget to specify the user to run it as. Try creating a new file inside your /etc/cron.d folder, named after what you want it to do, like getnytimes, with the contents of that file being just:
02 * * * * root /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
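A minimal way to create that file from a shell (assuming sudo access; files in /etc/cron.d must be owned by root and must not be group- or world-writable, or cron will ignore them):
echo '02 * * * * root /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/' | sudo tee /etc/cron.d/getnytimes
sudo chmod 644 /etc/cron.d/getnytimes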
In my case the cron job was running, but the script it ran failed. The failure was due to my using a relative path instead of an absolute path in an include line inside the script.
What did the trick for me was:
Make sure the cron service was running:
sudo service crond status
Restart it by running:
sudo service crond restart
Reschedule the cron job as usual:
crontab -e
Running
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
gives me the error
/home/ubuntu/backups/testfile: No such file or directory
(likely because the /home/ubuntu/backups directory doesn't exist). Is this your issue?
I guess cron isn't writing this error anywhere you can see it; you can redirect stderr to stdout and capture the error like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/ > /home/ubuntu/error.log 2>&1

Restart nginx without sudo?

So I want to be able to cap:deploy without having to type any passwords. I have set up all the private keys so I can get to the remote servers fine, and am now using svn over ssh, so no passwords there.
I have one last problem: I need to be able to restart nginx. Right now I have sudo /etc/init.d/nginx reload. That is a problem because it prompts for the capistrano user's password, the one I just removed because I am using keys. Any ideas on how to restart nginx without a password?
I just spent a good hour looking at sudoers wildcards and the like trying to solve this exact problem. In truth, all you really need is a root-executable script that restarts nginx.
Add this to the /etc/sudoers file:
username ALL=NOPASSWD: /path/to/script
Write the script as root:
#! /bin/bash
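# Send SIGHUP to the nginx master process so it reloads its configuration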
/bin/kill -HUP `cat /var/run/nginx.pid`
Make the script executable
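For example (same /path/to/script placeholder as above; root ownership keeps a non-root user from editing a script that runs with root privileges):
sudo chown root:root /path/to/script
sudo chmod 700 /path/to/script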
Test.
sudo /path/to/script
There is a better answer on Stack Overflow that does not involve writing a custom script:
The best practice is to use /etc/sudoers.d/myusername
The /etc/sudoers.d/ folder can contain multiple files that allow users
to call stuff using sudo without being root.
The file usually contains a user and a list of commands that the user
can run without having to specify a password.
Instructions:
In all commands, replace myusername with the name of the user that you want to use to restart nginx without a password.
Open the sudoers file for that user:
$ sudo visudo -f /etc/sudoers.d/myusername
An editor will open. Paste the following line there; this will allow that user to run nginx start, restart, and stop:
myusername ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
Save by hitting Ctrl+O. It will ask where you want to save; simply press Enter to confirm the default. Then exit the editor with Ctrl+X.
Now you can restart (and start and stop) nginx without a password. Let's try it.
Open a new session (otherwise you might simply not be asked for your sudo password because it has not timed out):
$ ssh myusername@myserver
Stop nginx
$ sudo /usr/sbin/service nginx stop
Confirm that nginx has stopped by checking your website or running ps aux | grep nginx
Start nginx
$ sudo /usr/sbin/service nginx start
Confirm that nginx has started by checking your website or running ps aux | grep nginx
PS: Make sure to use sudo /usr/sbin/service nginx start|restart|stop, and not sudo service nginx start|restart|stop.
Run sudo visudo
Append the lines below (in this example you can add multiple scripts and services, separated by commas):
# Run scripts without asking for pass
<your-user> ALL=(root) NOPASSWD: /opt/fixdns.sh,/usr/sbin/service nginx *,/usr/sbin/service docker *
Save and exit with :wq
Create a rake task in Rails_App/lib/capistrano/tasks/nginx.rake and paste in the code below.
namespace :nginx do
  %w(start stop restart reload).each do |command|
    desc "#{command.capitalize} Nginx"
    task command do
      on roles(:app) do
        execute :sudo, "service nginx #{command}"
      end
    end
  end
end
Then ssh to your remote server and open the sudoers file (using visudo is safer, since it validates the syntax before saving):
sudo vi /etc/sudoers
and paste this line (after the line %sudo ALL=(ALL:ALL) ALL):
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *
Or, as in your case,
deploy ALL=(ALL:ALL) NOPASSWD: /etc/init.d/nginx *
Here I am assuming your deployment user is deploy.
You can add other commands here too, for which you won't be asked for a password. For example:
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *, /etc/init.d/mysqld, /etc/init.d/apache2
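Once the sudoers entry is in place, you can trigger a restart from your local machine like any other Capistrano task (the stage name production is an assumption):
cap production nginx:restart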
