I've created and launched my WordPress site on AWS using EC2. I followed this tutorial to create the site. It's currently mapped to a domain using Route 53. All development on the site is done online in my instance.
I would now like to install an SSL Certificate on my site. How would I do so?
If you created WordPress on AWS using "Bitnami",
you may ssh to your instance and run:
sudo /opt/bitnami/bncert-tool
See the Bitnami docs for details.
If you're looking for an easy and free solution, try https://letsencrypt.org/. They have an easy-to-follow doc for anyone.
TL;DR: Head to https://certbot.eff.org/, choose your OS and server type, and they will give you a 4-5 line installation procedure to obtain and install a certificate automatically.
Before attempting, make sure your domain name is correctly pointed to your EC2 instance using Route 53 or an Elastic IP.
For example, here's all you need to run to automatically get and install SSL on an Ubuntu EC2 instance running nginx:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx
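Once those finish, running certbot with the nginx plugin should fetch and install the certificate in one go. A minimal sketch, assuming example.com is the domain you pointed at the instance:
$ sudo certbot --nginx -d example.com -d www.example.com
Certbot will prompt for an email address and whether to redirect HTTP to HTTPS, then edit the nginx config for you.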
Best of luck!
This tutorial provides a simple 3-step guide to setting up SSL for your WordPress site on AWS using Let's Encrypt / Certbot:
https://blog.brainycheetah.com/index.php/2018/11/02/wordpress-switching-to-https-ssl-hosted-on-aws/
Step 1: Get SSL certificate
Step 2: Configure redirects
Step 3: Update firewall
At each stage replace 'example.com' with your own site address.
Install certbot:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache
Create certificates:
$ sudo certbot --apache -m admin@example.com -d example.com -d www.example.com
To configure redirects, first open the wp-config file:
$ sudo vim /var/www/html/example.com/wp-config.php
Insert the following above the "stop editing" comment line:
// HTTPS configuration
define('WP_HOME','https://example.com');
define('WP_SITEURL','https://example.com');
define('FORCE_SSL_ADMIN', true);
And finally, update firewall via the AWS console:
Log in to your AWS control panel for your EC2 / Lightsail instance
Select the Networking tab
Within the Firewall section, just below the table, select Add another
Custom and TCP should be pre-populated within the first two fields by default; leave these as they are
Within the Port range field, enter 443, then select Save
Then just reload your apache config:
sudo service apache2 reload
And you should be good to go.
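One extra thing worth checking: Let's Encrypt certificates expire after about 90 days. Certbot normally installs a cron job or systemd timer for renewal, but you can confirm it works with a dry run:
$ sudo certbot renew --dry-run
If the dry run passes, renewals should happen automatically.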
According to the tutorial, since you have configured only an EC2 instance, the direct approach is to purchase an SSL certificate and install it on the Apache server. For detailed steps, follow this tutorial:
How to Add SSL and HTTPS in WordPress.
If you plan to use free SSL certificates issued by AWS Certificate Manager, then you are required to configure either an Elastic Load Balancer or the CloudFront CDN. This can get complicated if you are new to AWS. If you plan to give it a try with AWS CloudFront, follow the steps in How To Use Your Own Secure Domain with CloudFront.
Using CloudFront also provides a boost in performance, since it caches your content and reduces the load on your EC2 instance. However, one of the challenges you will face is avoiding mixed-content issues. There are WordPress plugins that are capable of resolving mixed-content issues, so do try them out.
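For example, assuming WP-CLI is available on the instance, installing such a plugin from the command line could look like this (the plugin name is only an illustration, not an endorsement):
$ wp plugin install ssl-insecure-content-fixer --activate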
This is how I enabled SSL on my WordPress website.
I have used Let's Encrypt X.509 certificates. Let's Encrypt is a certificate authority that provides X.509 certificates in an automated fashion for free. You can find more information about Let's Encrypt at https://letsencrypt.org/.
Steps to follow:
SSH into the instance and switch to root.
Download Certbot
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
Run certbot to fetch the certificates
sudo ./certbot-auto --debug -v --server https://acme-v01.api.letsencrypt.org/directory certonly -d "your-domain-name"
A wizard will be launched asking you to choose between the Apache, WebRoot, and Standalone options. Select the WebRoot option and continue. Note the directory of your domain.
Usually /var/www/html will be the directory for your domain. After success you will have three files in the following paths:
Certificate: /etc/letsencrypt/live/<<<"Domain-Name">>>/cert.pem
Full Chain: /etc/letsencrypt/live/<<<"Domain-Name">>>/fullchain.pem
Private Key: /etc/letsencrypt/live/<<<"Domain-Name">>>/privkey.pem
Copy the .pem file paths into /etc/httpd/conf.d/ssl.conf. Then restart Apache:
service httpd restart
And finally, I enabled the Really Simple SSL plugin in WordPress. That's it!
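One gap in the steps above is renewal, since Let's Encrypt certificates are only valid for 90 days. A hedged sketch of a root cron entry that reuses the downloaded certbot-auto script (the path below is a placeholder; use wherever you saved it):
# path to certbot-auto is a placeholder; adjust to your setup
0 3 * * * /path/to/certbot-auto renew --quiet --post-hook "service httpd restart"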
I am having real troubles with a wildcard certificate for a server. It is a server on AWS running the Bitnami WordPress Multisite.
I was able to install the wildcard certificate, but when the renewal was due the process didn't seem to be in place. I have tried to run this manually with:
GODADDY_API_KEY={someKey} \
GODADDY_API_SECRET={someSecret} \
sudo /opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew
But I keep getting the same issue:
godaddy: some credentials information are missing: GODADDY_API_KEY,GODADDY_API_SECRET
Any ideas?
I have tried to run the code in a shell script
godaddy.sh
GODADDY_API_KEY={someKey} \
GODADDY_API_SECRET={someSecret} \
sudo /opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew
Same result
I also tried godaddy.sh with:
export GODADDY_API_KEY "{someKey}"
export GODADDY_API_SECRET "{someSecret}"
sudo /opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew
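One detail that may matter here: sudo resets the caller's environment by default, so variables set or exported in the calling shell may never reach lego at all. A hedged variation (not verified on Bitnami) that passes the credentials through env inside the sudo command:
sudo env GODADDY_API_KEY="{someKey}" GODADDY_API_SECRET="{someSecret}" \
/opt/bitnami/letsencrypt/lego --email="admin@domain.com" --domains="*.domain.com" --domains="domain.com" --dns godaddy --path="/opt/bitnami/letsencrypt" renew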
I successfully migrated a WordPress site from BlueHost to AWS Lightsail. When I go to update the plugins, WordPress is asking for FTP credentials (see the image).
By default, you can only connect to the Lightsail instance via an SSH key, which I have successfully done via Transit.
In your Lightsail firewall rules, make sure you allow access to TCP ports 21 and 1024-1048 from 127.0.0.1.
SSH to your Lightsail instance (use PuTTY for Windows unless you know how to edit files with vim)
Run the following commands to install vsftpd:
sudo apt install vsftpd
sudo nano /etc/vsftpd.conf
uncomment these lines:
local_enable=YES
write_enable=YES
add these lines:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=127.0.0.1
Press Ctrl+X, Y, Enter to save the changes to the file (this is why I said to use PuTTY)
Run this command to see which group owns the wp-content directory:
ls -l /home/bitnami/apps/wordpress/htdocs/
In my Lightsail instance, it was the "daemon" group.
Note: other articles suggest adding this user to the bitnami group, but in my experience this resulted in errors during updates stating that it was not able to create directories.
Run the following to create a new user and assign it to this group so that it will have access to write to the wp-content directory.
(in the following lines, substitute your own username for ftpuser)
sudo /etc/init.d/vsftpd restart
sudo adduser ftpuser
sudo usermod -d /home/bitnami ftpuser
sudo usermod -a -G daemon ftpuser
sudo /etc/init.d/vsftpd restart
Now you can try your updates again and it should work.
Use 127.0.0.1 for the hostname and specify the new ftpuser credentials you just created.
I have installed the latest version of nginx. It installed successfully.
But I get an error when running the command below.
sudo ufw allow 'Nginx Full'
ERROR: Could not find a profile matching 'Nginx Full'
sudo ufw app list
shows only:
Available applications:
OpenSSH
How do I add these applications?
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
I have installed the nginx server twice, but still get:
ERROR: Could not find a profile matching 'Nginx Full'
Ubuntu (18.04)
You can see which apps are available by running this command:
ufw app list
Ports:
HTTP - 80
HTTPS - 443
Simple way to add them to UFW:
ufw allow 80,443/tcp
If you want to accomplish this via an application profile, you will need to create the application ini file within /etc/ufw/applications.d
Example:
vi /etc/ufw/applications.d/nginx.ini
Place this inside the file:
[Nginx HTTP]
title=Web Server
description=Enable NGINX HTTP traffic
ports=80/tcp
[Nginx HTTPS]
title=Web Server (HTTPS)
description=Enable NGINX HTTPS traffic
ports=443/tcp
[Nginx Full]
title=Web Server (HTTP,HTTPS)
description=Enable NGINX HTTP and HTTPS traffic
ports=80,443/tcp
Then run these commands:
ufw app update nginx
ufw app info 'Nginx HTTP'
ufw allow 'Nginx HTTP'
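Once the profile file is in place, you should also be able to allow the combined profile from the original error and confirm the rule took effect:
ufw allow 'Nginx Full'
ufw status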
I had the same problem; it turned out Nginx was not installed for some reason.
So it showed only OpenSSH when running
sudo ufw app list
I figured this out when I tried to uninstall Nginx using the command
sudo apt-get remove nginx
The output showed something like this:
Package 'nginx' is not installed, so not removed
Now you have to try installing Nginx again using the commands:
sudo apt update
sudo apt install nginx
sudo ufw app list
Now the options will be available. Check to see:
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
Now allow HTTP port using the command:
sudo ufw allow 'Nginx HTTP'
And finally run this command:
sudo ufw enable
Now hit the URL in the browser and it will show the Nginx default page.
If you get ERROR: Could not find a profile matching 'OpenSSH', first install ssh with the given command:
sudo apt-get install ssh
After installing the package, allow OpenSSH:
sudo ufw allow OpenSSH
sudo ufw status
Tested
This happened to me after installing nginx using the official site's instructions for Ubuntu.
Simply reinstall it like this (after removing it if already installed):
sudo apt-get remove nginx
sudo apt install nginx
Currently I'm trying to follow this guide:
https://marxtudor.com/how-to-install-wordpress-using-ssh-on-centos-vps/
I'm using Google Cloud Platform (free tier, to test) and I've created a fresh CentOS 7 VM. The commands from the guide above are the first ones I run, and I keep getting this error:
I've followed so many tutorials and created a new VM, and every time I bump into this error saying it doesn't know the httpd command. I even deleted the project and started all over, but still no luck.
[rsa-key-XXXXXX]$ sudo service httpd restart
Redirecting to /bin/systemctl restart httpd.service
Failed to restart httpd.service: Unit not found.
[rsa-key-XXXXXX]$ httpd -t
-bash: httpd: command not found
[rsa-key-XXXXXX]$
Could anyone please let me know what could be causing this?
Thanks in advance!
I was also getting the same error; this is how I resolved my issue.
After logging to the machine:
Step 1: Become the root user.
command: sudo su
Step 2: Update packages
command: yum update -y
Step 3: Install Apache
command: yum install httpd -y
Step 4: Start Apache
command: service httpd start
Step 5: Check the status of the service
command: service httpd status
This should solve your problem. Good luck!
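Optionally, if you also want Apache to come back up after a reboot, you can enable it at boot with the same service-style tooling (this step isn't strictly required to fix the error):
command: chkconfig httpd on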
Do you want to install WordPress for your Compute Engine VM instance, using CentOS 7?
If this is the case, you may do so by setting up LAMP for your VM, as described here [1], and then download the WordPress release of your choice [2] and install it on your VM.
I understand that you have successfully set up a VM instance using CentOS 7, is this correct? Assuming this, and as you may see from [1], for CentOS 7, these would be the commands to perform this installation:
1) Update and install Apache and PHP:
sudo yum check-update
sudo yum -y install httpd php
2) Start the Apache service:
sudo service httpd start
sudo chkconfig httpd on
3) Install, configure and start DB:
sudo yum -y install httpd mariadb-server php php-mysql
sudo systemctl start mariadb
4) Configure MySQL (set a password for the root user if you want):
sudo mysql_secure_installation
5) Restart Apache
sudo service httpd restart
Once MySQL is set up, you will have to create a database for your WordPress installation.
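As a minimal sketch of that database step (the database name, user and password below are placeholders, not values from the linked tutorial):
# names and password below are placeholders; pick your own
sudo mysql -u root -p -e "CREATE DATABASE wordpress; CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'choose-a-password'; GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost'; FLUSH PRIVILEGES;"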
Following this procedure, you will have Apache, MySQL and PHP installed and running on your Compute Engine VM instance.
Then, you can download the WordPress release of your choice [2], unzip the file and install WordPress by visiting your IP address and the folder where WordPress was downloaded. For example, http://YOUR_PUBLIC_VM_IP_ADDRESS/wordpress.
You will be asked for a database name, the user and password. This will allow WordPress to create the wp-config.php file on your behalf and proceed with the installation.
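A rough sketch of that download step, assuming the default CentOS 7 Apache document root /var/www/html and the apache user (adjust both to your setup):
cd /var/www/html
sudo curl -O https://wordpress.org/latest.tar.gz
sudo tar -xzf latest.tar.gz
# apache:apache is an assumption for a stock CentOS 7 httpd install
sudo chown -R apache:apache wordpress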
At this point, you should have WordPress already installed on your Compute Engine VM instance using CentOS 7.
An easier way to install WordPress on Compute Engine VM instances would be by using the Marketplace in the Cloud Platform Console. Go to the Products and Services menu > Marketplace, and search for "WordPress". You will be presented with many different options to launch WordPress on a Compute Engine VM instance. Nevertheless, it seems that Debian is the default OS used for these options.
Links:
[1] https://cloud.google.com/community/tutorials/setting-up-lamp
[2] https://wordpress.org/download/
In my case, I resolved it by looking up which actual package name had "httpd" in it.
yum search httpd
It returned httpd.x86_64
Also, later on, when doing sudo service httpd start, I received a notification that PolicyKit1 was needed. So, all in all, these commands installed the packages and started Apache:
yum install -y httpd.x86_64 polkit-qt.x86_64
service httpd start
I'm attempting to set up Nginx as a load balancer using the StrongLoop Nginx Controller. Nginx will be acting as a load balancer for a StrongLoop LoopBack application hosted by the standalone StrongLoop Process Manager. However, I've been unsuccessful at making the Nginx deployment work by following the official directions from StrongLoop. Here are the steps I've taken:
Step #1 -- My first step was to install Nginx and the StrongLoop Nginx Controller on an AWS EC2 instance. I launched an EC2 server (Ubuntu 14.04) to host the load balancer, and attached an Elastic IP to the server. Then I executed the following commands:
$ ssh -i ~/mykey.pem ubuntu@[nginx-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-nginx-controller
$ sudo sl-nginx-ctl-install -c 444
Then I opened up port 444 in the security group of the EC2 instance using a Custom TCP Rule.
Step #2 -- My second step was to setup two Loopback application servers. To accomplish this I launched two more EC2 servers (both Ubuntu 14.04) for the application servers, and attached an Elastic IP to each server. Then I ran the following series of commands, once on each application server:
$ ssh -i ~/mykey.pem ubuntu@[application-server-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-pm
$ sudo sl-pm-install
$ sudo /sbin/initctl start strong-pm
Step #3 -- My third step was to deploy the application to each of the application servers. For this I used StrongLoop Arc:
$ cd /path/to/loopback-getting-started-intermediate # my application
$ slc arc
Once in the StrongLoop Arc web console, I built a tar for the application, and deployed it to both application servers. Then in the Arc Process Manager, I connected to both application servers. Once connected, I clicked "load balancer," and entered the Nginx host and port into the form and pressed save. This caused a message to pop up saying "load balancer config saved."
Something strange happened at this point: The fields in StrongLoop Arc where I just typed the settings for the load balancer (host and port) reverted back to the original values the fields had before I started typing. (The original port value was 555 and the original value in the host field was the address of my second application server.)
Don't know what to do next -- This is where I really don't know what to do next. (I tried opening my web browser and navigating to the IP address of the Nginx load balancer, using several different port values. I tried 80, 8080, and 3001, having opened up each in the security group, in an attempt to find the place I need to navigate to in order to see "load balancing" in action. However, I saw nothing by navigating to each of these places, with the exception of port 80, which served up the "Welcome to nginx" page, not what I'm looking for.)
How do I set up Nginx as a load balancer using the StrongLoop Nginx Controller? What's the next step in the process, assuming all of the steps listed above are correct?
What I usually do is this:
sudo sl-nginx-ctl-install -c http://0.0.0.0:444
Maybe this can solve your problem.