Is it possible and safe to create a RAM disk on an existing EC2 instance? If so, how do I do that? - asp.net

I have an EC2 instance that currently holds our web site, and we are thinking about pointing our "Temporary ASP.NET Files" compilation folder to a RAM disk on EC2.
I'm not sure if it's a good idea and couldn't find much information about it.

RAM disks are normally used for speeding up access to commonly used files. In your case I would assume that speed is not the issue, but that you want to save space on your main drive. In that case I would use the EC2 instance storage - in contrast to the EBS-backed main storage, it is wiped when the instance is stopped or terminated (it does survive a plain reboot), so it is a good fit for throwaway data like a compilation folder.
Here is a posting on Serverfault that shows you how to mount the instance storage. Also have a look at instance storage in the AWS documentation. Then you would move your Temporary ASP.NET Files folder to one of the instance store volumes.
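For reference, on a Linux instance the steps from that posting look roughly like the sketch below; the device name and mount point are assumptions, so check lsblk or the instance metadata for the real device. On a Windows/IIS host the equivalent is to bring the instance store disk online in Disk Management and then point the ASP.NET compilation tempDirectory setting at it.

# Rough sketch for a Linux instance; /dev/xvdb is only a common default for the
# first instance store volume. Formatting destroys whatever is on the disk.
sudo mkfs -t ext4 /dev/xvdb
sudo mkdir -p /mnt/instance-store
sudo mount /dev/xvdb /mnt/instance-store
# Anything stored here is lost when the instance is stopped or terminated.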

Related

Storing Code & Application Files When Auto Scaling On AWS

I have a simple WordPress website which I am migrating to AWS and which requires auto scaling.
At the moment it's on a shared server and runs a service which saves XML files on the server; the site then reads those files for its own purposes using PHP.
If I set up an instance and put my code there, the service will presumably write to instance storage, which I read is a bad option in case of instance termination, so I have two questions:
1) In terms of auto scaling - when I am scaling out, is the code that I put on my first instance automatically duplicated and available on the new instance?
2) Where should my service save the XMLs so they would be accessible to all instances when scaled up?
Thank you for any replies, and apologies if my questions seem elementary; I am new to AWS.
When using Auto Scaling, code and files put on one EC2 instance will not automatically propagate to the other EC2 instances. You need to set that up yourself, or use additional configuration software to manage that.
Also, when using Auto Scaling, you need to be prepared for your EC2 instances to be terminated at any time. So do not keep anything on your EC2 instance that is not safe to lose permanently. Everything critical to keep should be kept off your EC2 instances (code, database, uploaded content).
Code should either be baked directly into the AMI that's used for Auto Scaling, or each instance should fetch the code it needs on startup. There are many ways to handle this. I'd recommend looking at Elastic Beanstalk to manage your EC2 instances and application deployment.
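As a rough sketch of what the Elastic Beanstalk route can look like with the EB CLI (the application and environment names below are placeholders, not anything from your setup):

# Assumes the EB CLI is installed (pip install awsebcli); names are placeholders.
eb init my-app --platform php --region us-east-1
eb create my-app-env
# Subsequent code changes are pushed with a single command:
eb deploy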
To handle your XML files (assets), I would recommend saving the dynamic assets (non-code) to something like Amazon S3. This would allow all EC2 instances to access the files.
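For example, with the AWS CLI and an instance role that allows S3 access, the service could push its XML files to a bucket and every instance could pull them down; the bucket name and paths below are made up for illustration:

# Upload the generated XML files to a shared bucket (placeholder names).
aws s3 cp /var/app/data/devices.xml s3://my-app-assets/xml/devices.xml
# Any instance can then fetch the latest copies on a schedule or at startup.
aws s3 sync s3://my-app-assets/xml/ /var/app/data/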
You can use Amazon Elastic File System (EFS) to store the XML files. To configure it:
Create an EFS file system and mount it automatically when the EC2 instance initializes, using the user data section of the launch configuration (a sketch is shown below).
Copy the XML files to the mounted directory.
EFS will make sure the files are immediately available across instances.
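A user data sketch for that mount might look like the following, assuming Amazon Linux and a placeholder file system ID (fs-12345678):

#!/bin/bash
# Mount an EFS file system at boot (placeholder file system ID).
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs
# Optionally persist the mount across reboots:
echo 'fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0' >> /etc/fstab
# Copy or write the XML files into the shared directory:
cp /var/app/data/*.xml /mnt/efs/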

Deploying a Symfony 2 application in AWS Opsworks

I want to deploy a PHP application from a git repository to the AWS OpsWorks service.
I've set up an app and configured Chef cookbooks so it runs the database schema creation, dumps assets, etc.
But my application has some user-generated files in a subfolder under the web root. The git repository has a .gitignore file in that folder, so an empty folder is there when I run the deploy command.
My problem is: after generating some files (by using the site) in that folder, if I run the 'deploy' command again, OpsWorks adds a new release under the 'site_name/releases/xxxx' folder and symlinks to it from the 'site_name/current' folder.
So it makes my previous 'user generated stuff' inaccessible. What is the best solution for this kind of situation?
Thanks in advance for your kind answers.
You have a few different options. Listed below in order of personal preference:
Use Simple Storage Service (S3) to store the files.
Add an Elastic Block Store (EBS) volume to your server and save files to the volume.
Save files to a database (This is something I would not do myself but the option is there.).
When using OpsWorks, think of replicable/disposable servers.
What I mean by this is that if you can create one server (call it server A) and then switch to a different one in the same stack (call it server B), the result of using server A or server B should not impact how your application works.
While it may seem like a good idea to save your user-generated files in a directory that is shared between different versions of your app (every time you deploy, a new release directory is generated), when you destroy your server you run the risk of destroying your files.
Benefits and downsides of using S3?
Benefits:
S3 gives you high redundancy and availability for your files.
S3 is external to your application server, so if your server dies or you decide to move it to a different region, you can continue using the same S3 bucket.
Easy to scale: you could add multiple application servers that read and write files to S3.
Downsides:
You need extra code in your application. You will have to use the AWS API in order to store and retrieve the files. Using the S3 API is not hard, but it may require an extra step to get where you need to be. Take a look at the "Using an Amazon S3 Bucket" walkthrough for reference; it includes the code used to upload files to the S3 bucket in the example.
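As a small illustration of the kind of extra step involved, handing a stored file back to a user usually means either streaming it through your application or generating a time-limited link; with the AWS CLI that second option looks roughly like this (bucket and key are placeholders):

# Generate a link to a stored object that expires after one hour.
aws s3 presign s3://my-app-uploads/user-files/report-42.pdf --expires-in 3600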
Benefits and downsides of using EBS?
Benefits:
EBS is an "external hard drive" that you can easily mount to your machine using the OpsWorks Resource Manager.
EBS volumes can be backed up and restored.
It may be the fastest option to implement and integrate into your application.
Downsides:
You need to assign it to an instance before it is running.
It could be time consuming to move from server A to server B (downtime may be required).
You cannot scale your application horizontally. While you can create copies of the EBS volume and assign them to different instances, the volumes will not be shared.
Downside of using a database?
Just do a Google search on "storing files in database".
Take a look at Storing Images in DB - Yea or Nay?
My preferred choice would be to use S3, but ultimately this is your decision.
Good luck!
EDIT:
Take a look at this repository, opsworks-chef-cookbooks; it contains some recipes for deploying a Symfony2 application on OpsWorks. I have been using it for over a year and it works quite well.
Use Chef templates, and use them in a recipe in the OpsWorks Deploy lifecycle event.

The right way to create multiple instances for Load Balancer (EC2)

I installed WordPress on EC2. I created a load balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the load balancer. But I'm still getting database errors and have to restart the instances. If I'd like to put 4 instances behind the load balancer, are the steps the same? I ask because I saw a "Number of Instances" option when I launched an AMI, with a default value of 1. I'm not sure if I should enter 3 or 4 to create multiple instances in one click.
Also, if I update the Wordpress1 instance, will the updates show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances, a database, and so on, you should consider using AWS CloudFormation. A CloudFormation template is just a big JSON document that describes the configuration of your environment, including the servers, auto scaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress, including a database and Auto Scaling groups (example wordpress template).
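If you prefer the command line to the console, launching a stack from a template looks roughly like this; the template URL, key pair and password below are placeholders rather than the actual values of the sample template:

aws cloudformation create-stack \
  --stack-name wordpress-demo \
  --template-url https://example-bucket.s3.amazonaws.com/wordpress-template.json \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair \
               ParameterKey=DBPassword,ParameterValue=change-me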
However, as datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multi-server environments is that if you upload a file or, in your case, upgrade WordPress, it will only happen on one server, which could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud you should always keep in mind that every service you build, in your case the frontend web servers and the database, should be allowed to fail without interrupting your service.
Another point is that you should avoid doing things by hand; automation is key.
An environment where you need to link your servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted and replaced.
For your web servers you can use Auto Scaling groups to get this behavior.
If you are using Auto Scaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the load balancer as soon as it is considered healthy.
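As a rough CLI sketch of that setup (all names, the AMI ID and the classic load balancer below are placeholders):

# Launch configuration and Auto Scaling group with placeholder names/IDs.
aws autoscaling create-launch-configuration \
  --launch-configuration-name wp-lc \
  --image-id ami-12345678 \
  --instance-type t2.micro
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name wp-asg \
  --launch-configuration-name wp-lc \
  --min-size 2 --max-size 4 \
  --availability-zones us-east-1a us-east-1b \
  --load-balancer-names my-elb \
  --health-check-type ELB --health-check-grace-period 300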
For your database, Amazon offers RDS Multi-AZ deployments, which provide automatic failover.
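A minimal sketch of creating such a database from the CLI (identifiers and password are placeholders; --multi-az is what provisions the standby replica with automatic failover):

aws rds create-db-instance \
  --db-instance-identifier wordpress-db \
  --db-instance-class db.t3.small \
  --engine mysql \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password change-me \
  --multi-az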
Applying upgrades in the cloud can be tricky, and there are different ways to do it: for example, using a shared NFS mount for the code base, git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud may not be the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much for this answer.
Auto Scaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, each instance cannot store data that may be needed by other instances. The most common requirement is the database, which will need to exist on its own instance outside of the auto-scaled instances. You could use RDS for this.
WordPress also stores file uploads, plugins and themes within the wp-content folder inside the WordPress install. By default, if you upload a file, it will be stored on one instance but not on any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
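If you go the shared-NFS route, a minimal sketch looks like this, assuming the instance holding the files is at the placeholder address 10.0.0.10 and has nfs-kernel-server installed:

# On the instance that holds the uploads: export the directory to the subnet.
echo '/var/www/html/wp-content/uploads 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
# On every other instance: mount the share over the local uploads folder.
sudo mount -t nfs 10.0.0.10:/var/www/html/wp-content/uploads /var/www/html/wp-content/uploads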

Handling Wordpress media files on EC2/S3 with auto scaling

I'm working on a WordPress deployment configuration on Amazon AWS. I have WordPress running on Apache on an Ubuntu EC2 instance. I'm using W3 Total Cache for caching and to serve user-uploaded media files from an S3 bucket. A load balancer distributes traffic to two EC2 instances with auto scaling to handle heavy loads.
The problem is that user-uploaded media files are stored locally in wp-content/uploads/ and then synced to the S3 bucket. This means that the media files are inconsistent between EC2 instances.
Here are the approaches I'm considering:
Use a WordPress plugin to upload the media files directly to S3 without storing them locally. The problem is that the only plugins I've found (this and this) are buggy and poorly maintained. I'd rather not spend hours fixing one of these myself. It's also not clear whether they integrate cleanly with W3 Total Cache (which I also want to use for its other caching tools).
Have a master instance where users access the admin interface and upload media files. All media files would be stored locally on this instance (and synced to S3 via W3 Total Cache). Auto scaling would deploy slave instances with no local file storage.
Make all EC2 instances identical and point wp-content/uploads/ to a separate EBS volume. All instances would share media files.
Use rsync to copy media files between running EC2 instances.
Is there a clear winner? Are there other approaches I should think about?
You might consider looking at something like s3fs (http://code.google.com/p/s3fs/). This allows you to mount your S3 bucket as a volume on your server instances. You could simply have the command to mount the volume executed on instance start-up.
s3fs also has the ability to use local (ephemeral) directories as a cache for the s3fs mount to improve performance.
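A minimal mount sketch, assuming s3fs is installed, credentials are in /etc/passwd-s3fs, and the bucket name is a placeholder:

sudo mkdir -p /var/www/html/wp-content/uploads
sudo s3fs my-media-bucket /var/www/html/wp-content/uploads \
  -o allow_other -o use_cache=/tmp/s3fs-cache
# use_cache keeps copies in a local (ephemeral) directory to speed up reads.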

Where to put a new ASP.NET website?

Where's the best place for a production ASP.NET application? I mean a place that needs less permission manipulation on folders and is probably the experts' choice.
Under C:\inetpub\wwwroot, C:\inetpub, or elsewhere?
In the development/test phases I usually put it under C:\inetpub\wwwroot and create a new web application without setting bindings. But for the production version, with bindings, I'm not sure where the right place is.
You can put it anywhere you like; the key thing is to ensure that the app pool it is running under is set to run as a low-privileged user (like NT AUTHORITY\NETWORK SERVICE), then ensure that user has Read (and possibly Browse if you want it) permissions on the folder you put your web app in. Very seldom (if ever) will the user need Write or Modify permissions on the folder.
and on a new system I had a lot of problems modifying batch files and setting permissions
Setting permissions should not be a problem; you should set the same basic permissions I mentioned above for the user you want to run the app pool as. You can use PowerShell or WMI for this, and you should use the same permissions no matter what folder you install into.
You could always wrap all this up into an installer, then it can be as simple as hitting Next.. Next... Finish... in an installer wizard to set up your website on any machine. Doing this in an installer also gives you some certainty that nothing has been missed.
Personally I have a 'Development' folder on my D: drive which is then subdivided into different categories depending on the work. I generally don't use the inetpub directory, and any permission issues I come across I just set directly on the relevant folder within my own development structure.
On production environments I've used in the past, we've generally done the same thing. Mainly to help backup scenarios really, but also because there's no strict need to use the default IIS directories - you're free to structure things how you like.
Personally, I always create a new folder (in the root of a drive) called WebSites. I then make sure it has the appropriate permissions for the website process(es) (aka App Pools).
eg.
C:\
  |_ WebSites
       |_ www.Foo.com
       |_ www.Bah.com
It also makes it easier to manage because you don't have to hunt through the folder structures to find any/all websites.
But technically, it can be (more or less) anywhere - just needs to have the correct permissions set.
Bonus Answer
I also remove the Default Website from IIS .. which in effect means I can also delete c:\inetpub\wwwroot.
You can put the website anywhere on the server's hard disk. Just make sure it is a secure folder, and I also recommend not putting it on the same drive as the OS, in case that drive fails and you need to format it.
C:\inetpub\wwwroot and C:\inetpub are just the default places, nothing more.
It really depends on how the production server is configured and how the operations team likes to run things. Typically we set up a second "data" drive on servers for a few reasons:
a) Back in the old days, there were a lot of canonicalization (path traversal) attacks where the attacker would try to navigate from c:\inetpub to c:\winnt\cmd.exe. Putting things on a different drive prevented this sort of thing.
b) Recovery -- if the OS gets hosed, you can pretty easily reinstall/reimage or move the data disks to another box and get things stood up fast.
c) It's typically a lot easier to do things like swap out the non-OS disk if you need more disk space or faster disks.
Basically, off the OS drive is a good idea. Though virtualization and modern deployment tools make lots of this matter less.
