Bitnami WordPress Let's Encrypt Wildcard Certificate - wordpress

I am having real trouble with a wildcard certificate for a server. It is a server on AWS running Bitnami WordPress Multisite.
I was able to install the wildcard certificate, but when the renewal was due, the renewal process didn't seem to be in place. I have tried to run it manually with:
GODADDY_API_KEY={someKey} \
GODADDY_API_SECRET={someSecret} \
sudo /opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew
But I keep getting the same issue:
godaddy: some credentials information are missing: GODADDY_API_KEY,GODADDY_API_SECRET
Any ideas?
I have also tried running the command from a shell script, godaddy.sh:
GODADDY_API_KEY={someKey} \
GODADDY_API_SECRET={someSecret} \
sudo /opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew
Same result. I also tried this version of godaddy.sh:
export GODADDY_API_KEY "{someKey}"
export GODADDY_API_SECRET "{someSecret}"
sudo /opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew
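One likely explanation (an assumption, not confirmed in this thread) is that sudo resets the environment, so variables set or exported before the sudo call never reach lego. A minimal sketch of godaddy.sh that hands the credentials to the command explicitly via sudo env:

#!/bin/bash
# Sketch only: pass the GoDaddy credentials through sudo explicitly,
# since sudo strips the caller's environment by default.
sudo env \
  GODADDY_API_KEY="{someKey}" \
  GODADDY_API_SECRET="{someSecret}" \
  /opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew

Alternatively, running the whole script as root (sudo ./godaddy.sh) with the variables exported inside it, or using sudo --preserve-env=GODADDY_API_KEY,GODADDY_API_SECRET on reasonably recent sudo versions, avoids the problem in a similar way.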

Related

How do I create an FTP user in Amazon Lightsail to update WordPress plugins

I successfully migrated a WordPress site from BlueHost to AWS Lightsail. When I go to update the plugins, WordPress asks for FTP credentials (see the image).
By default, you can only connect to the Lightsail instance via SSH certificate, which I have done successfully via Transit.
In your Lightsail firewall rules, make sure you allow access to TCP ports 21 and 1024-1048 from 127.0.0.1.
SSH into your Lightsail instance (use PuTTY on Windows unless you know how to edit files with vim).
Run the following commands to install vsftpd and open its configuration:
sudo apt install vsftpd
sudo nano /etc/vsftpd.conf
uncomment these lines:
local_enable=YES
write_enable=YES
add these lines:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=127.0.0.1
Press Ctrl+X, then Y, then Enter to save the changes to the file (this is why I said to use PuTTY).
Run this command to see which group owns the wp-content directory:
ls -l /home/bitnami/apps/wordpress/htdocs/
In my Lightsail instance, it was the "daemon" group.
Note: other articles suggest adding this user to the bitnami group, but in my experience this resulted in errors during updates saying that it was not able to create directories.
Run the following to create a new user and assign it to this group so that it has write access to the wp-content directory (in the following lines, substitute ftpuser with the new username):
sudo /etc/init.d/vsftpd restart
sudo adduser ftpuser
sudo usermod -d /home/bitnami ftpuser
sudo usermod -a -G daemon ftpuser
sudo /etc/init.d/vsftpd restart
Now you can try your updates again and they should work.
Use 127.0.0.1 for the hostname and specify the credentials of the new FTP user you just created.
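If uploads still fail with a permission denied error after these steps, one extra step that may be needed (my assumption, not part of the original answer) is to make sure the daemon group actually has write permission on wp-content:

# Give the daemon group write access to wp-content (adjust the path if yours differs)
sudo chmod -R g+w /home/bitnami/apps/wordpress/htdocs/wp-content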

How to migrate an existing domain with SSL certificate from CentOS/Apache to Docker/Nginx?

We have a site running on a CentOS/PHP/Apache stack. We want to migrate the whole site to Docker/PHP-FPM/Nginx using docker-compose.
So far we've set up plans for migrating pretty much everything except the domain and the existing SSL certificate.
How do we go about this?
Nginx is up and running on port 80
ports:
- '9007:80'
How can we redirect the existing domain to the Docker container and also use the existing SSL certificate?
No need for the hassle, someone already did the work for you:
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
It's a fully configured auto-SSL Docker setup that does basically exactly what you need. Start your website container with the following additional parameters (from the Git repo):
docker run -d -e VIRTUAL_HOST=your.domain.com \
-e LETSENCRYPT_HOST=your.domain.com \
-e LETSENCRYPT_EMAIL=your.email@your.domain.com \
--network=webproxy \
--name my_app \
httpd:alpine
I can only recommend it; it's a great solution for hosting multiple projects on one server.
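Since the question uses docker-compose, a rough compose equivalent of that docker run might look like the sketch below (an approximation, not taken from the repo; it assumes the proxy stack from the linked repository is already running and has created the external webproxy network):

version: '3'
services:
  my_app:
    image: httpd:alpine            # replace with your own PHP-FPM/Nginx image
    environment:
      - VIRTUAL_HOST=your.domain.com
      - LETSENCRYPT_HOST=your.domain.com
      - LETSENCRYPT_EMAIL=your.email@your.domain.com
    networks:
      - webproxy
networks:
  webproxy:
    external: true                 # created by the proxy/companion stack

The companion container requests and renews Let's Encrypt certificates based on those environment variables, so once DNS points at the new host there is no need to migrate the existing certificate by hand.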

certbot SSL certificate stops working on nginx configuration update

I have a Django application with CI/CD set up via Bitbucket on AWS EC2 using AWS CodeDeploy.
In the AWS CodeDeploy hooks, under AfterInstall, I have:
hooks:
AfterInstall:
- location: scripts/ngnix.sh
timeout: 6000
runas: ubuntu
and the nginx.sh script is:
#!/usr/bin/env bash
mkdir -p /etc/nginx/sites-enabled
mkdir -p /etc/nginx/sites-available
sudo mkdir -p /etc/nginx/log/
sudo unlink /etc/nginx/sites-enabled/*
sudo cp /path_to_app/configs/nginx.conf /etc/nginx/sites-available/app-host.conf
sudo ln -s /etc/nginx/sites-available/app-host.conf /etc/nginx/sites-enabled/app-host.conf
sudo /etc/init.d/nginx stop
sudo /etc/init.d/nginx start
sudo /etc/init.d/nginx status
But every time this script is run via the CI/CD pipeline, SSL stops working and the website is no longer accessible over HTTPS.
To re-enable SSL, I have to manually run
sudo certbot --nginx
and re-configure the SSL certificate.
What could be causing SSL to stop working, and how can I automate this?
certbot procures the SSL certificates from Let's Encrypt and keeps those certificates on your machine. You can run sudo certbot certificates to see the certificate paths:
Found the following certs:
Certificate Name: example.com
Domains: example.com, www.example.com
Expiry Date: 2017-02-19 19:53:00+00:00 (VALID: 30 days)
Certificate Path: /etc/letsencrypt/live/example.com/fullchain.pem
Private Key Path: /etc/letsencrypt/live/example.com/privkey.pem
You need to store the files located at Certificate Path & Private Key Path in a persisted volume so they don't get wiped out every time you deploy your app. In your case I think these certificate files are getting wiped out, and that is the reason you have to run sudo certbot --nginx to procure a new certificate.
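If the files under /etc/letsencrypt do survive the deploy and it is only the nginx site config that gets overwritten, one way to automate re-enabling HTTPS (a sketch with placeholder domain and email, not something stated in this answer) is to append a non-interactive certbot run to the end of nginx.sh:

# Re-attach the existing certificate to the freshly copied nginx config.
# Placeholders: replace example.com and admin@example.com with your own values.
sudo certbot --nginx --non-interactive --agree-tos \
     -m admin@example.com -d example.com -d www.example.com \
     --keep-until-expiring --redirect

With --keep-until-expiring, certbot reuses the existing certificate while it is still valid and only rewrites the SSL directives into the new configuration.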

How to Install SSL on AWS EC2 WordPress Site

I've created and launched my WordPress site on AWS using EC2. I followed this tutorial to create the site. It's currently mapped to a domain using Route 53. All development on the site is done online in my instance.
I would now like to install an SSL Certificate on my site. How would I do so?
If you created WordPress on AWS using "Bitnami",
you may SSH to your instance and run:
sudo /opt/bitnami/bncert-tool
See the Bitnami docs for details.
If you're looking for an easy and free solution, try https://letsencrypt.org/. They have an easy-to-follow doc for anyone.
TL;DR: Head to https://certbot.eff.org/, choose your OS and server type, and they will give you a 4-5 line procedure to install the certificate automatically.
Before attempting this, make sure your domain name is correctly pointed to your EC2 instance using Route 53 or an Elastic IP.
For example, here's all you need to run to automatically get and install SSL on an Ubuntu EC2 instance running nginx:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx
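The commands above only install certbot itself; the actual certificate request is not shown in the answer. Assuming an nginx server block already exists for your domain (example.com is a placeholder), it would look something like:

$ sudo certbot --nginx -d example.com -d www.example.com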
Best of luck!
This tutorial provides a simple 3-step guide to setting up your WordPress on AWS using Let's Encrypt / Certbot:
https://blog.brainycheetah.com/index.php/2018/11/02/wordpress-switching-to-https-ssl-hosted-on-aws/
Step 1: Get SSL certificate
Step 2: Configure redirects
Step 3: Update firewall
At each stage replace 'example.com' with your own site address.
Install certbot:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache
Create certificates:
$ sudo certbot --apache -m admin@example.com -d example.com -d www.example.com
To configure redirects, first open the wp-config file:
$ sudo vim /var/www/html/example.com/wp-config.php
Insert the following above the "stop editing" comment line:
// HTTPS configuration
define('WP_HOME','https://example.com');
define('WP_SITEURL','https://example.com');
define('FORCE_SSL_ADMIN', true);
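The defines above only make WordPress generate HTTPS URLs; a server-level redirect usually completes the redirect step. A hypothetical Apache example (not from the linked post; adjust the vhost file and domain to your setup):

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    # Send all plain-HTTP requests to the HTTPS site
    Redirect permanent / https://example.com/
</VirtualHost>

Note that certbot --apache can also add this redirect for you if you choose the redirect option during the interactive run.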
And finally, update the firewall via the AWS console:
Log in to your AWS control panel for your EC2 / Lightsail instance.
Select the Networking tab. Within the Firewall section, just below the table, select Add another.
Custom and TCP should be pre-populated in the first two fields by default; leave these as they are.
In the Port range field enter 443, then select Save.
Then just reload your Apache config:
sudo service apache2 reload
And you should be good to go.
According to the tutorial, since you have configured only an EC2 instance, the direct approach is to purchase an SSL certificate and install it into the Apache server. For detailed steps, follow the tutorial:
How to Add SSL and HTTPS in WordPress
If you plan to use the free SSL certificates issued by AWS Certificate Manager, it requires configuring either an Elastic Load Balancer or the CloudFront CDN. This can get complicated if you are new to AWS. If you plan to give it a try with AWS CloudFront, follow the steps in How To Use Your Own Secure Domain with CloudFront.
Using CloudFront also provides a boost in performance since it caches your content and reduces the load on your EC2 instance. However, one of the challenges you will face is avoiding mixed-content issues. There are WordPress plugins capable of resolving mixed-content issues, so do try them out.
This is how I enabled SSL on my WordPress website.
I used Let's Encrypt X.509 certificates. Let's Encrypt is a certificate authority that provides X.509 certificates in an automated fashion for free. You can find more information about Let's Encrypt [here][2].
Steps to follow:
SSH into the instance and switch to root.
Download Certbot and make it executable:
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
Run certbot to fetch the certificates:
sudo ./certbot-auto --debug -v --server https://acme-v01.api.letsencrypt.org/directory certonly -d "your-domain-name"
A wizard will launch asking you to choose between the Apache, WebRoot, and Standalone options. Select the WebRoot option and continue. Note the directory of your domain.
Usually /var/www/html will be the directory for your domain. After success you will have three certificates at the following paths:
Certificate: /etc/letsencrypt/live/<<<"Domain-Name">>>/cert.pem
Full Chain: /etc/letsencrypt/live/<<<"Domain-Name">>>/fullchain.pem
Private Key: /etc/letsencrypt/live/<<<"Domain-Name">>>/privkey.pem
Copy the pem file paths into /etc/httpd/conf.d/ssl.conf, then restart Apache:
sudo service httpd restart
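For reference, inside ssl.conf those paths typically end up in directives like the following (a sketch, not from the answer; on Apache 2.4.8 and later fullchain.pem can go straight into SSLCertificateFile, while older versions use cert.pem plus an SSLCertificateChainFile pointing at the chain):

# Inside the SSL virtual host in /etc/httpd/conf.d/ssl.conf
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/<<<"Domain-Name">>>/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/<<<"Domain-Name">>>/privkey.pem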
And finally, I enabled the Really Simple SSL plugin in WordPress. That's it!

FTP files up to Amazon AWS server

I have not been able to find an answer to this question, although there have been some similar questions. I have an Amazon EC2 instance that I launched using a "Magento quick start" from the Amazon Marketplace. The site is working; I can reach it via my domain and can also SSH in.
However, when I log in via FileZilla I am using my key and the username ubuntu. When I try to FTP something into my var/www/magento folder I get a permission denied error. The default owner/group is nginx nginx.
If I do a
sudo chown -R ubuntu:nginx var/www/magento
I am then able to FTP files up to the server. However, when I go to the URL in the browser, the site gives me a "this site can not be reached" error in Chrome. However, if I run
sudo chown -R nginx:nginx var/www/magento
I am then able to see the site in the browser, but am back to not being able to upload anything.
I also tried doing
sudo adduser ubuntu nginx
I got a success message but am still not able to FTP; I get a permission denied error.
So what is the SSH command (or commands) that would let me log in over SFTP as user ubuntu, upload files, and even change file permissions, without making the site stop loading when you type in its domain name?
I believe I may just need to add the "ubuntu" user to the nginx group?
If so, what would that command be?
Thanks
Try sudo usermod -a -G nginx ubuntu; this will add your existing ubuntu user to the nginx group. Also, log out from any existing session and log back in.
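One caveat worth adding (my assumption, not part of the answer above): group membership only helps if the nginx group actually has write permission on the Magento tree, so something like the following may also be needed:

# Give the nginx group write access to the Magento directory (adjust the path to your install)
sudo chmod -R g+w /var/www/magento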
