OpenShift V3 WordPress on free account - not enough storage

I host a WordPress site on my OpenShift 2 free account and need to migrate to 3 before 30 September, when V2 is switched off. I have tried to create a WordPress site following this blog - https://blog.openshift.com/migrating-wordpress-openshift-3/ - but have hit a roadblock.
I add a SQL database but can't make it smaller than 1 GB. I then can't add persistent storage for WordPress because it says I am at my storage limit. Therefore I can't keep themes, plugins, images, etc. in persistent storage.
Am I missing something or is it no longer possible to host Wordpress on an Openshift free account?
Thanks!

The blog post uses a procedure which requires two separate persistent volumes: one for the database and one for WordPress. Using a single persistent volume shared between the two is a bit more complicated and involves running both the database and WordPress in the same pod. This can't easily be done through the web console.
In principle, if using the command line, you would start with a command similar to:
oc new-app php~https://github.com/WordPress/WordPress mysql \
    --group=php+mysql \
    -e MYSQL_USER=wordpress \
    -e MYSQL_PASSWORD=wordpress \
    -e MYSQL_DATABASE=wordpress
and then go on to attach a different subdirectory of the one persistent volume to each application in the pod.
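As a rough sketch of that second step, assuming the grouped deployment config is named wordpress, a claim named wordpress-pvc already exists, and your oc client supports the --sub-path flag (all names, mount paths, and sub-paths here are hypothetical):

# Mount the "wordpress" subdirectory of the shared volume into the PHP container
oc set volume dc/wordpress --add --name=site-data \
    --type=persistentVolumeClaim --claim-name=wordpress-pvc \
    --mount-path=/opt/app-root/src/wp-content \
    --sub-path=wordpress --containers=wordpress

# Mount the "mysql" subdirectory of the same volume into the MySQL container
oc set volume dc/wordpress --add --name=db-data \
    --type=persistentVolumeClaim --claim-name=wordpress-pvc \
    --mount-path=/var/lib/mysql/data \
    --sub-path=mysql --containers=mysql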
So technically it can probably be done, but it is a bit more fiddly to set up.
Do be aware that the Starter tier is not intended for sites which need to be running all the time. Applications will be subject to resource hibernation as explained in:
https://www.openshift.com/pricing/index.html

Related

Do I need a VM instance for each WordPress instance on Google Cloud?

I've been playing with Google Cloud, trying to figure out the most cost-effective way to host multiple low-traffic WordPress websites.
With Bitnami, it seems that for every new WordPress instance I have to provision a new virtual machine. I also tried Google's click-to-deploy WordPress setup, and it forced me to provision a cluster with 3 VMs.
Each of the new VMs costs money, so I'm wondering if there's a way to do something similar to shared Linux hosting, where I could host multiple WordPress instances on a single virtual machine.
You can use the Bitnami WordPress Multisite stack, which allows multiple sites to run on one server.
If you don't want to use the Bitnami Multisite solution, you can also install multiple WordPress apps on the same server without installing multiple databases or web servers. Bitnami provides modules to install on top of an installed stack (normally a LAMP stack), and the WordPress module allows you to set the name of the blog you want to create.
The module can be downloaded from here, but you will need to run the following commands on the instance (these commands download the current version):
wget https://bitnami.com/redirect/to/269995/bitnami-wordpress-4.9.8-0-module-linux-x64-installer.run
chmod a+x bitnami-wordpress-4.9.8-0-module-linux-x64-installer.run
sudo ./bitnami-wordpress-4.9.8-0-module-linux-x64-installer.run --wordpress_instance_name NEW_BLOG_NAME
Once you have the module installed, you will be able to access it through http://localhost/NEW_BLOG_NAME.
More info in the Bitnami documentation
https://docs.bitnami.com/installer/apps/wordpress/configuration/install-several-wordpress-modules/
I found the following post, which explains how it might be done.
http://designhack.slashlab.net/en/how-to-setup-multiple-wordpress-without-multisite-ft-bitnami/
Make sure to back up important data before you start.
It is possible to set up multiple websites using Bitnami, but I recommend keeping every site separate to avoid database confusion and to make it easier to extend each website's functionality.
https://bitnami.com/stack/wordpress-multisite
I'm using a single VM per domain to avoid confusion with DNS.

CD-CM setup with merge replication

I am in the process of trying to make the publishing process quicker and simpler for one of our customers on their Sitecore-based website. Through research I stumbled upon merge replication, which might solve some of our issues, but it introduces others.
I need your help and guidance to figure out which way is the best!
We've got a CD & CM setup: one CM server with its own SQL instance, and two CD servers with a SQL instance each.
At the moment we have the following setup:
CM (master, web, and core databases). Web is exposed only internally on a secure admin URL and works like a preview site.
CD1 & CD2 are the servers for visiting users; each has its own publishing target in Sitecore.
When we deploy a release:
1. Deploy new code for CM. Publish templates and potential content changes for Sitecore to Web. Verify and authenticate that everything is correct.
2. Take CD1 out of the Load Balancer, deploy new code to CD1, publish templates and potential changes to Sitecore, verify and authenticate, then put the server back into the load balancer.
3. Repeat step 2 for CD2.
4. Deployment done
This process is working OK for us now; we are up and running at all times without downtime on the site.
We've got a few issues with the current setup:
Our search index (Elasticsearch) is populated when CM publishes to Web, so Elasticsearch can contain data which is not yet published to the CD servers.
When publishing, editors could forget to publish to one of the CD servers, causing inconsistencies between the servers, which we would like to avoid.
Everything needs to be published multiple times for the same environment, which takes time.
Editors do not know what a CD server is, they just want to have a “preview” and “Live” publishing target.
I've looked into merge replication for Sitecore, and actually have it working in a test environment. The advantage we want from this is that we only have two publishing targets:
Preview (CM server preview database)
Live (CM server web database, which then gets replicated out to the CD servers' web databases)
The Elasticsearch instance will rely on data from CM's web database, which is live data.
We can have an Elasticsearch instance running on Preview as well.
The issue is that I can no longer deploy to only CD1 or CD2 when doing a deployment. What if I have breaking Sitecore changes? The site will break if replication pushes new, breaking Sitecore items to a server which hasn't been deployed to yet.
How can I get the best of both worlds?
Do you have an ES for each CD?
If you publish data to a single CD and have a shared ES you will get inconsistency either way.
Otherwise I would make changes to the publish dialog so that only an admin/developer could see the CD servers individually.
Example of normal user:
Preview
Live
Example of admin user:
Preview
Live
CD1
CD2

How to preserve themes/plugins when autoscaling Wordpress instances

Forgive me if this is an obvious question, but I am trying to figure out what the best way is of handling autoscaling of EC2 instances running WordPress such that their themes and plugins (along with their associated configurations) are preserved.
I am already able to decouple the data and content layers via RDS and S3, respectively, but I am struggling with how to preserve the themes and plugins through an EC2 instance autoscaling event.
My EC2 instances are configured as follows:
EC2 bootstrap script installs WordPress onto a blank Amazon Linux AMI
EC2 runs behind an ELB
Database is on RDS
Web content is on S3 (using W3 Total Cache plugin)
Plugins/themes are installed on the local EC2 filesystem
To preserve themes and/or plugins through an EC2 autoscaling event, I could:
Install the themes/plugins I need first, then upload the /wp-content/plugins and /wp-content/themes dirs to S3, downloading them automatically each time an EC2 instance restarts via the bootstrap script (see the sketch after this list). DISADVANTAGES: need to update S3 every time I make a config change, not all plugins keep their files neatly within those subdirs, and changes to one instance don't flow to all (need to restart the cluster every time a change is made).
Install the themes/plugins I need first, then take an AMI snapshot of the entire instance. Use this AMI as a template when launching new instances. DISADVANTAGES: need to update the AMI every time I make a config change (which seems tiresome), and changes to one instance don't flow to all.
Create symbolic links for the /wp-content/plugins and /wp-content/themes dirs, pointing to an EFS filesystem that is mounted on all EC2 instances. DISADVANTAGES: EFS can be a bit slow, and not all plugins keep their files entirely within those subdirs.
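For what it's worth, a minimal sketch of Option 1's boot-time pull, assuming the AWS CLI is on the AMI and an instance role grants S3 access; the bucket name and paths are hypothetical:

#!/bin/bash
# Boot-time pull: fetch the current themes/plugins from S3 (bucket name is hypothetical)
aws s3 sync s3://my-wp-assets/themes /var/www/html/wp-content/themes
aws s3 sync s3://my-wp-assets/plugins /var/www/html/wp-content/plugins

# After changing a theme/plugin on a running instance, push it back the other way:
# aws s3 sync /var/www/html/wp-content/themes s3://my-wp-assets/themes
# aws s3 sync /var/www/html/wp-content/plugins s3://my-wp-assets/plugins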
Anybody have any experience with this? Am I over-engineering it? Perhaps the theme/plugin files don't really change much throughout the lifespan of a WordPress blog (i.e., once you're set up, you don't change them much), in which case maybe Option 1 (zip to S3 and download via the bootstrap script) is the best option for me, and Option 3 (EFS) is over-engineered.
I would love to get your take on this if you have experience with this conundrum!
Thanks in advance!
You can take a look at this link:
https://cloudonaut.io/wordpress-on-aws-smooth-and-pain-free/
It provides a CloudFormation template that sets up an ASG backed by EFS, installs WordPress and some plugins there, uses RDS for the database, sets up CloudFront as a CDN, and a few other goodies.
I tweaked their template for our use case: added an extra ASG with spot instances, replaced all the VPC stuff with references to my VPC template, and tweaked the LaunchConfig so it automatically sets up the S3 Offload plugin with a bucket created in my template. It also automatically sets up the certificate for the ELB and a few other goodies.
I thought that would be the end and that I would be able to forget about WordPress and leave another team to work on it. Wrong. They complained it felt sluggish, and some plugins failed to install with timeouts (and installing them manually using wp-cli took way too long; one of them took up to two and a half minutes).
So here are my 2 cents: set up RDS and CloudFront, use a reserved instance for WordPress, and offload your static assets to S3 using a plugin. Once the site is completely set up, bake an AMI or take a snapshot of your EBS volume, and set alarms so that if the instance breaks down you can quickly spin up a new one from your AMI/snapshot.
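The bake-an-AMI step is a one-liner with the AWS CLI; the instance ID here is a placeholder:

# Bake an AMI from the fully configured instance (ID is hypothetical)
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "wordpress-golden-$(date +%Y%m%d)" --no-reboot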
Either that or have your dev team bake AMIs from a dev environment so you can set up an ASG in your production environment with those.
I got to the point where I believe trying to set up WordPress for a non-dev team (meaning they can install/upload plugins and themes from the browser) in an ASG is just madness without support from a dev team (baking AMIs, updating stacks). You could, of course, automate all of this. You could, of course, develop a whole new site using anything else. Your call.
/rant

Scaling WordPress on Windows Azure

I'm running a WordPress multisite which in short periods every week experiences a large number of users, requiring more CPU + RAM.
I therefore wish to make use of Azure autoscale to turn on more instances when the demand is there. However, is it possible to make a setup where the different instances share the same storage and database? And if yes, how could it be done?
It is supported out-of-the-box:
Go to "Web Sites" and add a new website from the "Gallery".
Select "WordPress"
Follow the rest of the wizard.
The wizard allows you to create a MySQL database. The website runs as a cluster and uses a database which also runs on a cluster of servers (hosted by ClearDB).
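That wizard is from the classic portal; for reference, a rough sketch of the same idea with the current az CLI (the resource names, region, and SKU are placeholders, and WordPress itself would still need to be deployed to the web app):

az group create --name wp-rg --location westeurope
az appservice plan create --name wp-plan --resource-group wp-rg --sku S1
az webapp create --name my-wp-site --resource-group wp-rg --plan wp-plan
# Autoscale the plan between 1 and 4 instances
az monitor autoscale create --resource-group wp-rg \
    --resource wp-plan --resource-type Microsoft.Web/serverfarms \
    --name wp-autoscale --min-count 1 --max-count 4 --count 1

Instances of the same App Service plan share one file system, so uploads and plugins stay consistent across instances, which covers the shared-storage part of the question.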

The right way to create multiple instances for Load Balancer (EC2)

I installed WordPress using EC2. I created a Load Balancer by creating an image (AMI), then adding both Wordpress1 and Wordpress2 to the Load Balancer. But I'm still getting database errors and have to restart the instances. If I'd like to put 4 instances behind the Load Balancer, are the steps the same? I ask because I saw a "Number of Instances" option when I launched an AMI; the default value is 1. I'm not sure if I should enter 3 or 4 to create multiple instances in one click.
Also, if I update the Wordpress1 instance, will the updates show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances and a database etc., you should consider using AWS CloudFormation. A CloudFormation template is just a big JSON document that contains the configuration of your environment, including the servers, autoscaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress, including a database and autoscaling groups (example wordpress template).
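Launching such a template from the CLI is straightforward; the stack name, template URL, and key pair below are placeholders:

# Launch a stack from a template (URL and parameter values are hypothetical)
aws cloudformation create-stack \
    --stack-name wordpress-demo \
    --template-url https://s3.amazonaws.com/my-templates/WordPress_Multi_AZ.template \
    --parameters ParameterKey=KeyName,ParameterValue=my-keypair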
However, like datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multiserver environments is that if you upload a file or in your case upgrade wordpress, it will only happen on one server, which could be terminated at any point. Furthermore the upgrade could contain changes in the database structure and then its getting complicated.
If you are building something in the cloud, you should always keep in mind that every service you build (in your case the frontend webservers and the database) should be allowed to fail without interrupting your service.
Another point: you should avoid doing things by hand; automation is key.
An environment where you need to link your servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted, and exchanged.
For your webservers you can use autoscaling groups to get this behavior.
If you are using autoscaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the load balancer as soon as it is considered healthy.
For your database, Amazon offers RDS Multi-AZ deployments, which provide automatic failover.
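As a rough sketch, both pieces can be created from the CLI (the names, AZs, sizes, and credentials below are all placeholders):

# Webservers: an autoscaling group registered with the load balancer
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name wp-asg \
    --launch-configuration-name wp-launch-config \
    --min-size 2 --max-size 4 \
    --load-balancer-names my-elb \
    --availability-zones us-east-1a us-east-1b

# Database: an RDS instance with Multi-AZ automatic failover
aws rds create-db-instance \
    --db-instance-identifier wp-db \
    --db-instance-class db.t2.small \
    --engine mysql --multi-az \
    --allocated-storage 20 \
    --master-username wpadmin --master-user-password 'change-me'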
Applying upgrades in the cloud can be tricky, and there are different ways to do it: for example, using a shared NFS mount with the code base, git deployments, or the way you already started - creating a new AMI for every upgrade and then replacing the servers. There are a lot of options, and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud is maybe not the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, each instance cannot store data that may be needed by other instances. The most common example is the database, which will need to live on its own instance outside of the autoscaled instances. You could use RDS for this.
WordPress also stores file uploads, plugins, and themes within the wp-content folder of the WordPress install. By default, if you upload a file, it will be stored on one instance but not on any of the others. You could store everything on an NFS volume shared between the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
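If you go the NFS route, the usual trick is to mount the share and re-point wp-content at it; the server name and paths below are hypothetical:

# Mount the shared export and swap wp-content for a symlink to it
sudo mount -t nfs nfs-server:/export/wp-content /mnt/wp-content
sudo mv /var/www/html/wp-content /var/www/html/wp-content.bak
sudo ln -s /mnt/wp-content /var/www/html/wp-content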
