Deploying a Symfony 2 application in AWS OpsWorks

I want to deploy a PHP application from a Git repository to the AWS OpsWorks service.
I've set up an app and configured Chef cookbooks so that deployment runs the database schema creation, dumps assets, etc.
But my application has some user-generated files in a subfolder under the web root. The Git repository has a .gitignore file in that folder, so the folder is empty when I run the deploy command.
My problem is: after some files have been generated in that folder (by using the site), if I run the 'deploy' command again, OpsWorks adds a new release under the 'site_name/releases/xxxx' folder and symlinks to it from the 'site_name/current' folder.
This makes my previously generated user files inaccessible. What is the best solution for this kind of situation?
Thanks in advance for your kind answers.

You have a few different options. Listed below in order of personal preference:
Use Simple Storage Service (S3) to store the files.
Add an Elastic Block Store (EBS) volume to your server and save files to the volume.
Save files to a database (not something I would do myself, but the option is there).
When using OpsWorks, think of servers as replicable/disposable.
What I mean by this is that if you create one server (call it server A) and then switch to a different one in the same stack (call it server B), using server A or server B should not change how your application works.
While it may seem like a good idea to save your user-generated files in a directory that is shared between different versions of your app (a new release directory is generated every time you deploy), when you destroy your server you run the risk of destroying those files.
Benefits and downsides of using S3?
Benefits:
S3 gives you high redundancy and availability for your files.
S3 is external to your application server, so if your server dies or you decide to move it to a different region, you can continue using the same S3 bucket.
Easy to scale: you could add multiple application servers that all read and write files to S3.
Downsides:
You need extra code in your application. You will have to use the AWS API in order to store and retrieve the files. Using the S3 API is not hard, but it may require an extra step to get where you need to be. Take a look at the "Using an Amazon S3 Bucket" walkthrough for reference; it shows the code used to upload files to the S3 bucket in that example.
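As a rough illustration of that store/retrieve flow, here is a minimal sketch using boto3 (the AWS SDK for Python); the bucket name and keys are hypothetical, and a Symfony app would use the equivalent calls from the AWS SDK for PHP.

    import boto3

    # Hypothetical bucket name; a real application would read this from configuration.
    BUCKET = "my-app-user-uploads"

    s3 = boto3.client("s3")

    def store_upload(local_path: str, key: str) -> None:
        """Push a user-generated file to S3 instead of the release directory."""
        s3.upload_file(local_path, BUCKET, key)

    def fetch_upload(key: str, local_path: str) -> None:
        """Pull the file back, e.g. when serving or processing it."""
        s3.download_file(BUCKET, key, local_path)

    # Example usage:
    # store_upload("/tmp/avatar.png", "uploads/user-42/avatar.png")
    # fetch_upload("uploads/user-42/avatar.png", "/tmp/avatar.png")

Because the files never live under 'site_name/releases/xxxx', a new release (or a brand new instance) keeps seeing the same uploads.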
Benefits and downsides of using EBS?
Benefits:
EBS is an "external hard drive" that you can easily mount to your machine using the OpsWorks Resource Manager.
EBS volumes can be backed up (via snapshots) and restored (see the sketch after this list).
It may be the fastest option to implement and integrate to your application.
Downsides:
You need to assign the volume to an instance before the instance is running.
It could be time consuming to move from server A to server B (downtime may be required).
You cannot scale your application horizontally. While you can create copies of the EBS volume and assign them to different instances, the volumes will not be shared, so writes to one copy will not appear on the others.
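For reference, assigning and backing up a volume can also be done programmatically. Below is a small boto3 sketch with placeholder volume and instance IDs; in OpsWorks you would normally let the Resource Manager handle the assignment, and the volume still has to be formatted and mounted inside the OS.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder IDs; in OpsWorks the Resource Manager normally handles this assignment.
    VOLUME_ID = "vol-0123456789abcdef0"
    INSTANCE_ID = "i-0123456789abcdef0"

    # Attach the volume to the instance as /dev/xvdf.
    ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=INSTANCE_ID, Device="/dev/xvdf")

    # Back the volume up by creating a snapshot; it can later be restored to a new volume.
    ec2.create_snapshot(VolumeId=VOLUME_ID, Description="Backup of user-generated files")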
Downsides of using a database?
Just do a Google search for "storing files in a database".
Take a look at Storing Images in DB - Yea or Nay?
My preferred choice would be to use S3, but ultimately this is your decision.
Good luck!
EDIT:
Take a look at the opsworks-chef-cookbooks repository; it contains some recipes for deploying a Symfony2 application on OpsWorks. I have been using it for over a year and it works quite well.

Use Chef templates, and apply them from a recipe attached to the OpsWorks Deploy lifecycle event.

Related

applicationHost.xdt in App Service Plan instances

The Issue
I am currently in the process of integrating a pre-rendering service for SEO optimization; however, we use an Azure App Service Plan that scales up or down when necessary.
One of the steps for setting up the proper configuration requires placing an applicationHost.xdt file in the /site/ directory, which is one level above the /site/wwwroot directory where the application itself gets deployed to.
What steps should I take in order to have the applicationHost.xdt file persist to new instances spawned by the scaling process?
Steps I have taken to solve the issue
So far I have been Googling a lot, but I haven't found much documentation on using an applicationHost.xdt file in combination with an Azure App Service Plan.
I am able to upload the file to an instance manually; however, I assume that when we then scale out to more instances, the manually uploaded file will not be present on the new instance(s).
Etcetera
We are using Prerender.io as pre-rendering service.
Should an easier-to-set-up and similarly priced service be available, we would be open to suggestions, as we are in an exploratory phase regarding pre-rendering.
I suppose this won't be a problem, because all files under an Azure App Service app are shared between all your instances. You can check this in the Kudu wiki: Persisted files. In my test, all instances kept the file.
As for uploading the applicationHost.xdt, you don't have to do it manually; there is an IIS Manager site extension that lets you very easily create XDT files, and it provides some sample XDTs for you.

Storing Code & Application Files When Auto Scaling On AWS

I have a simple WordPress website which I am migrating to AWS, and it requires auto scaling.
At the moment it is on a shared server and runs a service which saves XML files on the server; the site then reads those files for its own purposes using PHP.
If I set up an instance and put my code there, the service will presumably write to instance storage, which I have read is a bad option in case of instance termination. So I have two questions:
1) In terms of auto scaling: when I am scaling out, is the code that I put on my first instance automatically duplicated and available on the new instance?
2) Where should my service save the XML files so that they are accessible to all instances when scaled out?
Thank you for any replies, and apologies if my questions seem elementary; I am new to using AWS.
When using Auto Scaling, code and files put on one EC2 instance will not automatically propagate to the other EC2 instances. You need to set that up yourself, or use additional configuration software to manage that.
Also, when using Auto Scaling, you need to be prepared for your EC2 instances to be terminated at any time. So do not keep anything on your EC2 instance that is not safe to lose permanently. Everything critical to keep should be kept off your EC2 instances (code, database, uploaded content).
Code should either be baked directly into the AMI image that's used for Auto Scaling, or each instance should fetch the code it needs on startup. There are many ways to handle this. I'd recommend looking at using Elastic Beanstalk to manage your EC2 instances and application deployment.
To handle your XML files (assets), I would recommend saving the dynamic assets (non-code) to something like Amazon S3. This would allow all EC2 instances to access the files.
You can use Amazon Elastic File System (EFS) to store the XML files. To configure it:
Create an EFS file system and mount it automatically when the EC2 instance initializes, using the user data section of the launch configuration (see the sketch after this list).
Copy the XML files to the mounted directory.
EFS will make sure the files are instantly available across instances.
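Here is a rough boto3 sketch of that launch configuration step; the file system ID, AMI, instance type, and security group are placeholders, and the user data assumes an Amazon Linux AMI where the amazon-efs-utils package is available.

    import boto3

    # Placeholder EFS file system ID; substitute your own.
    FILE_SYSTEM_ID = "fs-0123456789abcdef0"

    # Shell commands run by cloud-init on first boot: install the EFS mount helper,
    # mount the shared file system, and persist the mount across reboots.
    USER_DATA = f"""#!/bin/bash
    yum install -y amazon-efs-utils
    mkdir -p /mnt/xml-files
    mount -t efs {FILE_SYSTEM_ID}:/ /mnt/xml-files
    echo "{FILE_SYSTEM_ID}:/ /mnt/xml-files efs defaults,_netdev 0 0" >> /etc/fstab
    """

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="site-with-efs",
        ImageId="ami-0123456789abcdef0",          # placeholder AMI
        InstanceType="t3.small",
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder security group
        UserData=USER_DATA,                        # boto3 base64-encodes this automatically
    )

Every instance launched from this configuration then sees the same /mnt/xml-files directory.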

Transfer content from one Alfresco instance to another (same version) on another server

What would be the best (or a better) way to transfer repository content from one Alfresco instance (Enterprise Edition) to another instance running on a different server? Currently we copy the entire Alfresco database and the file system under alf_data, but that requires downtime on the servers.
I need a mechanism without downtime where the repository data is copied from one instance to another. Is there any way this is possible?
In addition to Heiko's solution, you might be interested in:
The out-of-the-box replication service, which wouldn't be good for replicating your entire repo, but can be used for replicating a handful of nodes from one server to another.
A solution from Parashift which allows one- and two-way replication of nodes between servers.
An Alfresco presentation on using Apache Camel and Apache Kafka to replicate nodes between servers. This is available through Alfresco's professional services organization, but it may make it into the product at some point. Or you could use it as inspiration to write your own solution.
What is your intention? A standby system, a real copy, an external private cloud with a subset of data?
If you just need a 100% clone, you can script backup and restore without downtime on the source server. Downtime is limited to the DB and index restore on the target system. Your script shouldn't copy live data from the Solr index; use the backup created by the Solr backup job instead. Depending on the database you use, an online DB backup shouldn't be an issue.
Our Alfresco Virtual Appliance has preconfigured scripts and jobs for this task, starting an additional Alfresco instance from snapshot backups without copying the contentstore (we call this the Alfresco Time Machine).
If your aim is an external private cloud server or a road-warrior solution, ecm4u has a commercial Alfresco module to sync a subset of modified nodes very efficiently, including metadata/types/aspects (the list of types and aspects needs to be defined). This sync provides a REST interface for automation and also allows manual execution from Alfresco's admin console. We support a mix of Alfresco versions and editions. At the moment this sync is unidirectional, but it could be extended to be bidirectional.
I recently did this task of installing two Alfresco instances on my local machine, running on two different ports.
While performing some tasks, I realized that the two instances having the same repository ID was causing issues.
I was able to change the repository ID of one of them by following the steps below:
Update alfresco-global.properties:
db.name="Add new DB Name"
(Alfresco will create a database with the db.name mentioned here while initializing.)
Then restart the server.
If you are still facing issues, try deleting the Solr indexes under the alf_data folder.

Is it possible and safe to create a RamDisk on an existing EC2 instance? If possible, how to do that?

I have an EC2 instance that currently holds our website, and we are thinking about pointing our "Temporary ASP.NET Files" compilation folder to a RAM disk on EC2.
I'm not sure if it's a good idea and couldn't find much info about it.
RAM disks are normally used for speeding up access to commonly used files. In your case I would assume that speed is not an issue, but that you want to save space on your main drive. In this case I would use the instance storage from EC2: in contrast to the EBS-backed main storage, instance storage is ephemeral, so its contents are lost when the instance is stopped or terminated.
Here is a posting on Server Fault that shows you how to mount the instance storage. Also have a look at instance storage in the AWS documentation. Then you would have to move your "Temporary ASP.NET Files" folder to one of the instance-store-backed disks.

ASP.NET Deployment Strategies

I have heard of lots of strategies for deploying ASP.NET applications, but I am not quite sure which strategy is best for my needs.
I have an ASP.NET 4 application with separate development/staging/production environments (different web.configs). I also need to manage SQL Server changes. It is possible that I will have more than one DB server and more than one app server to push changes to. Ideally, I would like to hit a button, say "deploy to staging" or "deploy to production", and have it deploy the code/DB/config files to the correct servers. Ideally, I'd also like some process to roll back in case of a bad release.
I have heard of xcopy/robocopy strategies, MSDeploy (now called Web Deploy?) strategies, and building MSI packages to deploy.
Which of these seems like the best fit for this type of need?
Method #1
If you have some time to spend, I suggest using CruiseControl.NET. For a while at least, the Stack Overflow team used this for deployments.
Method #2
As far as copy strategies go, I recommend using a combination of 7-Zip and FTP for the application and media. 7-Zip is nice, as it allows you to exclude specific files (e.g. web.config), folders, and file types, and it allows you to compress different files differently. For example, there is no point in compressing a PNG. Note that this does a full deployment every time, so if you have large media folders, I'd handle them separately.
As for the database, I believe you will have the best luck using SQL Compare by Redgate. It is a commercial application, but it is very, very good. It has been positively mentioned multiple times on the Stack Overflow podcasts.
Build a CMD file on the development/build server that generates the master 7-Zip archive and FTPs it to a dedicated folder on the staging (or production) server. I end up with multiple calls to 7-Zip feeding files into a single archive, using different compression methods for each batch.
Build a CMD file for each staging or production server. This file executes the appropriate file backups and extracts the 7-Zip archive to the proper location.
A deployment to staging will go like this:
Execute your 7-Zip prep command file, which triggers the FTP upload to a dedicated FTP folder on the staging server
Execute the DB changes against the staging database server via scripts generated by SQL Compare
Execute the 7-Zip extraction command file on the staging server
This is the method that I use. I have not invested the time to master CruiseControl.NET, but when I do I will probably use it instead, at least for larger applications. It's not a one-click deployment, but it allows for multiple efficient deployments per day (as I've been doing off and on for a few years now). The 7-Zip method is nice because, once you have your command files, you can copy them and use them for new projects very quickly. A rough sketch of the package-and-upload step follows below.
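For illustration only, here is a rough Python stand-in for that package-and-upload step (the real workflow uses 7-Zip and CMD files); the paths, host, and credentials are hypothetical, and per-environment files such as web.config are excluded just like in the 7-Zip approach.

    import ftplib
    import zipfile
    from pathlib import Path

    EXCLUDE = {"web.config"}      # keep per-environment config out of the package
    BUILD_DIR = Path("build")     # hypothetical build output folder
    ARCHIVE = Path("release.zip")

    # Package the build output, skipping excluded files (mirrors the 7-Zip step).
    with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in BUILD_DIR.rglob("*"):
            if path.is_file() and path.name not in EXCLUDE:
                zf.write(path, path.relative_to(BUILD_DIR))

    # Push the archive to a dedicated folder on the staging server (mirrors the FTP step).
    with ftplib.FTP("staging.example.com", "deploy_user", "secret") as ftp:
        ftp.cwd("/deployments/incoming")
        with ARCHIVE.open("rb") as fh:
            ftp.storbinary(f"STOR {ARCHIVE.name}", fh)

The per-server extraction and backup step would then run on the staging or production machine itself, as described above.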
