Storing Code & Application Files When Auto Scaling on AWS - WordPress

I have a simple WordPress website which I am migrating to AWS, and it requires autoscaling.
At the moment it's on a shared server, which runs a service that saves XML files to the server; the site then reads those files for its own purposes using PHP.
If I set up an instance and put my code there, the service will presumably write to instance storage, which I have read is a bad option in case of instance termination, so I have two questions:
1) In terms of auto scaling: when I am scaling out, is the code that I put on my first instance automatically duplicated and available on the new instance?
2) Where should my service save the XML files so they are accessible to all instances when scaled out?
Thank you for any replies, and apologies if my questions seem elementary; I am new to AWS.

When using Auto Scaling, code and files put on one EC2 instance will not automatically propagate to the other EC2 instances. You need to set that up yourself, or use additional configuration software to manage that.
Also, when using Auto Scaling, you need to be prepared for your EC2 instances to be terminated at any time. So do not keep anything on an EC2 instance that is not safe to lose permanently. Everything critical (code, database, uploaded content) should be kept off your EC2 instances.
Code should either be baked directly into the AMI image that is used for Auto Scaling, or the instance should "get" the code it needs on startup. There are many ways to handle this. I'd recommend looking at using Elastic Beanstalk to manage your EC2 instances and application deployment.
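For example, a minimal user data sketch of the "get it on startup" approach, assuming you keep a deployable copy of your code in an S3 bucket (the bucket name and web root below are hypothetical):

    #!/bin/bash
    # Hypothetical bootstrap: pull the current application code from S3 on boot.
    # Assumes the instance has an IAM role allowing read access to this bucket.
    aws s3 sync s3://my-app-deploy-bucket/current/ /var/www/html/ --delete
    # Restart the web server (assuming Apache on Amazon Linux) to serve the new code.
    systemctl restart httpd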
To handle your XML files (assets), I would recommend saving the dynamic assets (non-code) to something like Amazon S3. This would allow all EC2 instances to access the files.
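As a rough sketch of that flow using the AWS CLI (the bucket and file names are hypothetical), the service would write each XML file to S3 and any instance could read it back:

    # Hypothetical: the service pushes a generated XML file to S3...
    aws s3 cp /tmp/feed-latest.xml s3://my-site-assets/xml/feed-latest.xml
    # ...and any EC2 instance pulls it down when the site needs it.
    aws s3 cp s3://my-site-assets/xml/feed-latest.xml /tmp/feed-latest.xml

In PHP you would typically make the equivalent calls through the AWS SDK rather than shelling out.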

You can use Amazon Elastic File System (EFS) to store the XML files. To configure it:
Create an EFS file system and mount it automatically when each EC2 instance initializes, using the user data section of the launch configuration (see the sketch below).
Copy the XML files to the mounted directory.
EFS will make sure the files are instantly available across instances.
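A minimal user data sketch for that mount step (the file system ID and mount point are placeholders; assumes Amazon Linux with the amazon-efs-utils package, which provides the EFS mount helper):

    #!/bin/bash
    # Hypothetical: mount a shared EFS file system on boot.
    yum install -y amazon-efs-utils
    mkdir -p /mnt/xml-share
    mount -t efs fs-12345678:/ /mnt/xml-share
    # Persist the mount across reboots.
    echo "fs-12345678:/ /mnt/xml-share efs defaults,_netdev 0 0" >> /etc/fstab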

Related

applicationHost.xdt in App Service Plan instances

The Issue
I am currently in the process of integrating a pre-rendering service for SEO optimization; however, we use an Azure App Service Plan that scales up or down when necessary.
One of the steps for setting up the proper configuration requires placing an applicationHost.xdt file in the /site/ directory, which is one level above the /site/wwwroot directory where the application itself gets deployed to.
What steps should I take in order to have the applicationHost.xdt file persist to new instances spawned by the scaling process?
Steps I have taken to solve the issue
So far I have been Googling a lot, but haven't succeeded in finding much documentation on using an applicationHost.xdt file in combination with an Azure App Service Plan.
I am able to upload the file to an instance manually; however, I have assumed that when we then scale up to more instances, the manually uploaded file will not be present on the new instance(s).
Etcetera
We are using Prerender.io as our pre-rendering service.
Should an easier-to-set-up and similarly priced service be available, we would be open to suggestions, as we are in an exploratory phase regarding pre-rendering.
I suppose this won't be a problem, because all files under an Azure App Service are shared between all of your instances. You can check this in the Kudu wiki: Persisted files. In my test, all instances kept the file.
As for uploading the applicationHost.xdt, you don't have to do it manually; there is an IIS Manager Site Extension that lets you very easily create XDT files, and it provides some sample XDTs for you.

How to preserve themes/plugins when autoscaling Wordpress instances

Forgive me if this is an obvious question, but I am trying to figure out the best way of handling autoscaling of EC2 instances running WordPress such that their themes and plugins (along with their associated configurations) are preserved.
I am already able to decouple the data and content layers via RDS and S3, respectively, but I am struggling with how to preserve the themes and plugins through an EC2 instance autoscaling event.
My EC2 instances are configured as follows:
EC2 bootstrap script installs WordPress onto a blank Amazon Linux AMI
EC2 runs behind an ELB
Database is on RDS
Web content is on S3 (using W3 Total Cache plugin)
Plugins/themes are installed on the local EC2 filesystem
To preserve themes and/or plugins through an EC2 autoscaling event, I could:
Install the themes/plugins I need first, then upload the /wp-content/plugins and /wp-content/themes dirs to S3, downloading them automatically each time an EC2 instance starts, via the bootstrap script (see the sketch after this list). DISADVANTAGES: need to update S3 every time I make a config change, not all plugins are installed neatly within those subdirs, and changes to one instance don't flow to all (need to restart the cluster every time a change is made).
Install the themes/plugins I need first, then take an AMI snapshot of the entire instance. Use this AMI as a template when launching new instances. DISADVANTAGES: need to update the AMI every time I make a config change (seems tiresome), and changes to one instance don't flow to all.
Create symbolic links out of the /wp-content/plugins and /wp-content/themes dirs, pointing to an EFS filesystem that is mounted on all EC2 instances. DISADVANTAGES: EFS can be a bit slow, and not all plugins are installed entirely within those subdirs.
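For illustration, a bootstrap sketch of Option 1 (the bucket name and web root are hypothetical):

    #!/bin/bash
    # Hypothetical Option 1 bootstrap: pull themes/plugins kept in S3 into a
    # fresh instance's WordPress install.
    aws s3 sync s3://my-wp-assets/themes/  /var/www/html/wp-content/themes/
    aws s3 sync s3://my-wp-assets/plugins/ /var/www/html/wp-content/plugins/
    # After a config change, reverse the direction to push the changes back up:
    # aws s3 sync /var/www/html/wp-content/plugins/ s3://my-wp-assets/plugins/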
Anybody have any experience with this? Am I over-engineering this? Perhaps the theme/plugin files don't really change much throughout the lifespan of your WordPress blog (i.e., once you're set up, you don't really find yourself changing them much), in which case maybe Option 1 (zip to S3 and download via bootstrap script) is the best option for me, and Option 3 (EFS) is over-engineered.
I would love to get your take on this if you have experience with this conundrum!
Thanks in advance!
You can take a look at this link:
https://cloudonaut.io/wordpress-on-aws-smooth-and-pain-free/
It provides a CloudFormation template that sets up an ASG backed by EFS, installs WordPress and some plugins there, uses RDS for the database, sets up CloudFront as a CDN, and adds a few other goodies.
I tweaked their template for our use case: added an extra ASG with spot instances, replaced all the VPC stuff with references to my VPC template, and tweaked the LaunchConfig so it automatically sets up the S3 Offload plugin with a bucket created by my template. It also automatically sets up the certificate for the ELB, and a few other goodies.
I thought that would be the end of it and that I would be able to forget about WordPress and leave another team to work on it. Wrong. They complained it felt sluggish, and some plugins failed to install with timeouts (installing them manually using wp-cli took way too long, one of them up to two and a half minutes).
So here are my 2 cents: set up RDS and CloudFront, use a reserved instance for WordPress, and offload your static assets to S3 using a plugin. Once the site is completely set up, bake an AMI or take a snapshot of your EBS volume (see the sketch below), and set alarms so that, if the instance breaks down, you can quickly spin up a new one from your AMI/snapshot.
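Baking the AMI is a single call once the instance is fully configured (the instance ID and image name below are placeholders):

    # Hypothetical: create an AMI from the configured WordPress instance.
    # --no-reboot avoids downtime but risks filesystem inconsistency;
    # omit it if a short reboot is acceptable.
    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "wordpress-golden-$(date +%Y%m%d)" --no-reboot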
Either that, or have your dev team bake AMIs from a dev environment so you can set up an ASG in your production environment with those.
I got to a point where I believe trying to set up WordPress for a non-dev team (meaning, they can install/upload plugins and themes from the browser) in an ASG is just madness without support from a dev team (baking AMIs, updating stacks). You could, of course, automate all of this. You could, of course, develop a full new site using anything else. Your call.
/rant

Deploying a Symfony 2 application in AWS Opsworks

I want to deploy a php application from a git repository to AWS Opsworks service.
I've set up an App and configured Chef cookbooks so it runs the database schema creation, dumps assets, etc.
But my application has some user-generated files in a subfolder under the web root. The git repository has a .gitignore file in that folder, so an empty folder is there when I run the deploy command.
My problem is: after generating some files (by using the site) in that folder, if I run the 'deploy' command again, OpsWorks adds a new release under the 'site_name/releases/xxxx' folder and symlinks to it from the 'site_name/current' folder.
So it makes my previous 'user generated stuff' inaccessible. What is the best solution for this kind of situation?
Thanks in advance for your kind answers.
You have a few different options. Listed below in order of personal preference:
Use Simple Storage Service (S3) to store the files.
Add an Elastic Block Store (EBS) volume to your server and save files to the volume.
Save files to a database (This is something I would not do myself but the option is there.).
When using OpsWorks think of replicable/disposable servers.
What I mean by this is that if you can create one server (call it server A) and then switch to a different one in the same stack (call it server B), the result of using server A or server B should not impact how your application works.
While it may seem like a good idea to save your user-generated files in a directory that is shared between different versions of your app (every time you deploy, a new release directory is generated), when you destroy your server you run the risk of destroying your files.
Benefits and downsides of using S3?
Benefits:
S3 will give you high redundancy and availability for your files.
S3 is external to your application server, so if your server dies or you decide to move it to a different region, you can continue using the same S3 bucket.
Easy to scale: you could add multiple application servers that read and write files to S3.
Downsides:
You need extra code in your application. You will have to use the AWS API in order to store and retrieve the files. Using the S3 API is not hard, but it may require an extra step to get where you need. Take a look at the "Using an Amazon S3 Bucket" walkthrough for reference; it shows the code used to upload files to an S3 bucket.
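As a rough sketch of that extra step (the bucket and paths are hypothetical; a PHP app would normally make the equivalent calls through the AWS SDK):

    # Hypothetical: store a user-generated file in S3 instead of the release dir...
    aws s3 cp web/uploads/user-file.xml s3://my-app-uploads/user-file.xml
    # ...so any instance in the stack can retrieve it later.
    aws s3 cp s3://my-app-uploads/user-file.xml /tmp/user-file.xml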
Benefits and downsides of using EBS?
Benefits:
EBS is an "external hard drive" that you can easily mount to your machine using the OpsWorks Resource Manager.
EBS volumes can be backed-up and restored.
It may be the fastest option to implement and integrate into your application.
Downsides:
You need to assign it to an instance before it is running.
It could be time-consuming to move from server A to server B (downtime may be required).
You cannot scale your application horizontally this way. While you can create copies of the EBS volume and assign them to different instances, the volumes will not be shared.
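For reference, the backup/restore mentioned above looks roughly like this from the CLI (all IDs and the Availability Zone are placeholders):

    # Hypothetical: snapshot the EBS volume that holds the user files...
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
        --description "user uploads backup"
    # ...and later create a fresh volume from that snapshot to attach to server B.
    aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
        --availability-zone us-east-1a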
Downside of using a database?
Just do a Google search on "storing files in a database".
Take a look at Storing Images in DB - Yea or Nay?
My preferred choice would be to use S3, but ultimately this is your decision.
Good luck!
EDIT:
Take a look at this repository, opsworks-chef-cookbooks; it contains some recipes to deploy a Symfony2 application on OpsWorks. I have been using it for over a year and it works quite well.
Use Chef templates, and apply them in a recipe during the OpsWorks deploy lifecycle event.

The right way to create multiple instances for Load Balancer (EC2)

I installed WordPress using EC2. I created a Load Balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the Load Balancer. But I'm still getting a database error and have to restart the instances. If I'd like to put 4 instances behind the Load Balancer, are the steps the same? I ask because I saw a "Number of Instances" option when I launched an AMI; the default value is 1. I'm not sure if I should enter 3 or 4 to create multiple instances in one click.
Also, if I make updates on the Wordpress1 instance, will the updates show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances and a database etc., you should consider using AWS CloudFormation. A CloudFormation template is just a big JSON document that contains the configuration of your environment, including the servers, autoscaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress, including a database and autoscaling groups (example wordpress template).
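Launching a stack from such a template is a single CLI call, roughly like this (the stack name, template URL and parameter are placeholders):

    # Hypothetical: create a stack from a WordPress sample template.
    aws cloudformation create-stack --stack-name my-wordpress \
        --template-url https://s3.amazonaws.com/my-templates/wordpress.template \
        --parameters ParameterKey=DBPassword,ParameterValue=changeme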
However, as datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multi-server environments is that if you upload a file, or in your case upgrade WordPress, it will only happen on one server, which could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud, you should always keep in mind that every service you build (in your case, the frontend web servers and the database) should be allowed to fail without interrupting your service.
Another point is that you should avoid doing things by hand; automation is the key.
An environment where you need to link your servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted and exchanged.
For your web servers, you can use auto scaling groups to get this behavior.
If you are using auto scaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the load balancer as soon as it is considered healthy.
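Wired up from the CLI, that looks roughly like this (the names, AMI ID, instance type and subnets are placeholders):

    # Hypothetical: a launch configuration plus an auto scaling group that
    # registers its instances with a classic ELB named "my-elb".
    aws autoscaling create-launch-configuration \
        --launch-configuration-name wp-lc \
        --image-id ami-0123456789abcdef0 --instance-type t3.small
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name wp-asg \
        --launch-configuration-name wp-lc \
        --min-size 2 --max-size 4 \
        --load-balancer-names my-elb \
        --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"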
For your database, Amazon offers RDS Multi-AZ deployments, which provide automatic failover.
Applying upgrades in the cloud can be tricky, and there are different ways to do it: for example, using a shared NFS mount for the code base, git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options, and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud is maybe not the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need something like 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, each instance cannot store data that may be needed by other instances. The most common requirement is the database, which will need to exist outside of the autoscaled instances on its own instance. You could use RDS for this.
WordPress also stores file uploads, plugins and themes within the wp-content folder of the WordPress install. By default, if you upload a file, it will be stored on one instance but not on any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/

Is it possible and safe to create a RamDisk on an existing EC2 instance? If possible, how to do that?

I have an EC2 instance that currently holds our web site, and we are thinking about pointing our "Temporary ASP.NET Files" compilation folder to a RAM disk on EC2.
Not sure if it's a good idea and couldn't find much info around.
RAM disks are normally used for speeding up access to commonly used files. In your case, I would assume that speed is not an issue, but that you want to save space on your main drive. In that case, I would use the instance storage from EC2: in contrast to the EBS-backed main storage, instance storage is ephemeral and its contents are lost when the instance is stopped or terminated.
Here is a posting on Server Fault that shows you how to mount the instance storage; also have a look at the instance storage section of the AWS documentation. You would then move your Temporary ASP.NET Files folder to one of the instance-store-backed disks.
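On a Linux instance the gist is the following (the device name is a placeholder and varies by instance type; on a Windows instance the instance store simply shows up as an additional disk to format):

    # Hypothetical: format and mount an instance store volume.
    # WARNING: its contents are lost when the instance is stopped or terminated.
    mkfs -t ext4 /dev/xvdb
    mkdir -p /mnt/ephemeral
    mount /dev/xvdb /mnt/ephemeral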
