Release Symfony2 project to the web

I have almost finished developing a project built with Symfony2, and I wish to put it online.
However, I suppose there are a lot of things that need to be done so that everything works properly: the dev mode needs to be disabled, and so on. What needs to be done, and how?
What are the most important things to do on a Symfony2 project that will be available to everyone on the web?

I suggest you use Capifony for deployment. It does a lot of stuff out of the box, and you can make it run any custom commands you need. See its documentation for details.
Regarding the dev mode: unless you've removed the IP checks from app_dev.php, you don't have to worry about deploying it. Of course, if you wish, you can tell Capifony to delete it on deployment.
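For a first deployment, the Capifony workflow looks roughly like this (a sketch; the generated app/config/deploy.rb still needs to be filled in with your server details):

gem install capifony
cd myproject
capifony .          # generates the Capfile and app/config/deploy.rb
cap deploy:setup    # creates the release/shared directory structure on the server
cap deploy          # checks out the code and activates the new release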

The best way to handle deployment is to create a "build" script, which will:
Remove all test folders and files from your bundles and vendors.
Remove the app_dev.php file.
Make sure that app/cache and app/logs are fully writable/readable.
Pack your project into an archive (a tarball or RPM, for example).
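A minimal sketch of such a build script, assuming Git tags named v&lt;version&gt; and a tar.gz archive (the app name and paths are illustrative):

#!/bin/sh
set -e
VERSION=$1                                                     # e.g. 1.0.0, matching the tag v1.0.0
git archive --prefix=myapp/ "v$VERSION" | tar -x -C /tmp       # export a clean copy of the tag
cd /tmp/myapp
find src vendor -type d -name Tests -prune -exec rm -rf {} +   # strip test folders
rm -f web/app_dev.php                                          # no dev front controller in production
chmod -R a+rwX app/cache app/logs                              # must be writable by the web server
cd /tmp && tar -czf "myapp-$VERSION.tar.gz" myapp              # pack the build into an archive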
Before deployment, you should create a tag in your project, marking that a certain version of your application has been released (I recommend following this Git branching model). So:
Create the tag.
Run your build script.
Upload the archive to the host.
Unpack it.
Enjoy your project.

I'm currently researching the same thing.
The first thing you have to consider is how "professional" you want the deployment to be. There are a lot of tools you can use:
Continuous integration servers (e.g. Hudson, Jenkins)
Build tools (e.g. Phing, Capistrano --> Capifony, shell scripts)
Versioning tools (e.g. Git, SVN)
I think the simplest setup is using only a build tool, and I guess you are already using some kind of versioning.
Depending on which tool you use the setup will differ, but I think there are some things you should consider for your application (maybe not all of them apply):
Creating a tag in your versioning system.
Copying the new code into a fresh folder on production --> in a fresh folder you don't need to clear the cache and logs, since these shouldn't be in version control in the first place.
Loading Composer (if you're using it).
Installing the vendors.
Updating the database schema.
Installing the assets from your bundles.
Moving the symlink from the current version to the folder of the new site.
These are the things I currently need for production deployment of my application; if you deploy to a test environment, you should load fixtures and run your test scripts as well.
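Put together, those steps might look like the following sketch (the release layout, repository URL, and tag are assumptions; the console commands are the standard Symfony2 ones, with the schema update assuming Doctrine):

#!/bin/sh
set -e
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone --branch v1.2.0 git@example.com:myapp.git "$RELEASE"   # fresh folder per release
cd "$RELEASE"
curl -sS https://getcomposer.org/installer | php                 # load Composer
php composer.phar install --no-dev --optimize-autoloader         # install vendors
php app/console doctrine:schema:update --force                   # update the database schema
php app/console assets:install web                               # install assets from bundles
ln -sfn "$RELEASE" /var/www/current                              # move the symlink last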

One other option that is very well described here is to deploy the Symfony2 application with Apache Ant. Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other.
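To give a flavor of an Ant-based deployment, a build file might look like this minimal sketch (target names, excludes, and the server are assumptions; the scp task needs Ant's optional JSch library):

<project name="myapp" default="deploy">
    <target name="clean">
        <delete dir="build"/>
    </target>
    <target name="package" depends="clean">
        <mkdir dir="build"/>
        <!-- pack the project, leaving out caches, logs and the dev front controller -->
        <tar destfile="build/myapp.tar.gz" basedir="." compression="gzip"
             excludes="build/**,app/cache/**,app/logs/**,web/app_dev.php"/>
    </target>
    <target name="deploy" depends="package">
        <!-- requires ant-jsch on the classpath -->
        <scp file="build/myapp.tar.gz" todir="user@example.com:/var/www"
             keyfile="${user.home}/.ssh/id_rsa" trust="true"/>
    </target>
</project>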

Best way for deployments with builds, dependency managers and GIT?

I'm currently working with Git, Git Flow, Gulp and Bower. I work on the develop branch and use Git Flow to create releases on the master branch. So develop is equal to my local and test environments, release/version to the acceptance environment, and the master branch is equal to the production environment. See: http://nvie.com/files/Git-branching-model.pdf. Git runs on every environment, so deploying is nothing more than: git pull origin master.
I've got some dependencies like Bootstrap and Font Awesome handled with Bower, and I'm using Gulp to watch .less files for compiling to CSS, minifying CSS/JS, etc.
Now on to the questions: what should be in my Git repository? Let's say I'm working on a Magento project; it would be overkill to put Magento and all the Bower dependencies in the repository. Currently I exclude only the node_modules (for Gulp) and bower_components (dependencies) directories; when I run Gulp, the .less files from Bootstrap are merged and compiled together with my project-related .less files. The compiled .css file is currently included in my repository, because otherwise it's not possible to deploy with just a git pull. To get it working without having the compiled version in Git, I'd have to run Gulp on the production server.
What is the best method for not having my platform (Magento or WordPress) in my Git repository, while keeping the possibility to easily update? I came across this solution: http://blog.g-design.net/post/60019471157/managing-and-deploying-wordpress-with-git where they're using Git submodules. A nice solution, but this way the platform needs to be in a subdirectory. Not ideal, because I have to hack around to get it working that way (copy the index.php and change the include paths, etc.).
What about plugins/modules? Third-party plugins shouldn't be in my repository either, only the plugins I've created myself. But not all third-party plugins have a Git repository to use with, for example, Git submodules. For WordPress a plugin is just one directory, so in theory it's possible, but for Magento most plugins aren't just one directory (they have files in the app/code, app/design, skin, etc. directories). I have a lot of WordPress and Magento sites with overlapping plugins/modules, and right now every plugin/module is duplicated in each site's repository.
Should "compiled" files be in a repository? If you ask me: no, but I'm currently doing it so deployment is easy. Is it common to have Bower installed and to run Gulp on a production server, fetching dependencies and compiling right after a git pull? And continuously running a Gulp watcher (like I do locally) on production takes some extra, unnecessary resources?
I hope someone can put me in the right direction.
It's difficult to provide a universal answer that works for Magento, WordPress, and other platforms. My answer primarily targets Magento, but I'm sure that similar concepts could be applied to WordPress and other platforms.
It's possible to automatically install Magento as a dependency using Composer with magento-composer-installer. You can either use a public mirror, like this one, or set up your own repository. Once you've installed Magento itself as a dependency, it should be very straightforward to update it, just like any other dependency.
You can use Composer for modules as well (after all, Magento itself is just a bunch of modules and a few scripts to glue everything together). FireGento has a lot of common Magento modules which can be used with Composer out of the box. You can also set up your own repositories to use with Composer (we do this for internal modules that we use for multiple sites).
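To sketch the idea, a composer.json along these lines pulls in the installer, Magento core from a mirror, and a community module (all package names, versions, and the target directory here are illustrative; check the mirror and FireGento listings for the real coordinates):

{
    "require": {
        "magento-hackathon/magento-composer-installer": "*",
        "firegento/magento": "1.7.*",
        "firegento/germansetup": "*"
    },
    "repositories": [
        { "type": "composer", "url": "http://packages.firegento.com" }
    ],
    "extra": {
        "magento-root-dir": "htdocs/"
    }
}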
As for modules that don't have their own repositories, well, you have three options: don't use them (the fewer modules, the better), create your own repositories for them, or just commit them along with the rest of your codebase.
In an ideal world, your repository should only consist of your own source files. Everything else - whether it's compiled, installed through a dependency manager, or otherwise somehow automatically created - should not be in the repository.
But we don't live in an ideal world - it's painful to run Composer, Bower, Git, Gulp, Sass, and everything else that you're using on each and every environment that you want to deploy to (development machines, testing server(s), staging server(s), production server(s), and so on).
And what if those dependencies have dependencies? What if you're using a Gulp plugin that works well on one of your servers but fails on another? What if someone makes a deployment and forgets to run some of the required Gulp builds? Of course, the answer to these questions is testing and automation. You should be able to click a button (or type a command, for the purists) and have everything deploy automatically: a git pull is issued, the appropriate gulp commands are run, and so on. But unless you have the manpower and man-hours to set up such a sophisticated system, it's not worth it.
Overall, we use a combination of the different points above. In the end, we end up committing (almost) everything to the repository - resulting in deployments as simple as git pull or svn up. Of course, configuration files (credentials, .htaccess files, and so on) don't go in the repository, and neither do fingerprinted files (which we manually upload).
Fabrizio Branca has an excellent blog post here which goes through many of the described points in order to clean up and upgrade a Magento setup.

Adobe CQ5 Setup in production

I am not a CQ guy, but I have to use CQ5 for one of my projects. I have a CAT and a production environment. I have the following doubts:
I want to use the author instance of my CAT only. Once I publish the content in CAT, it should be published in production as well. Is this possible?
Once I update the AdobeCQ build in my production environment (say a new build, code changes, etc.), will my content be lost?
I read somewhere about content packages in CQ5. Can I separate content changes and code changes in one CQ5 environment?
Thanks in advance.
To answer question 1...
This is not a recommended setup, but a common misconception for someone unfamiliar with AEM/CQ5. The "author" and "publish" instances should be part of the same environment. For example you should have a production author, probably behind your firewall, and production publish to serve pages to the public.
Your CAT environment should have the same thing. You want your testing environment to match as closely as possible to your production environment, including web server and dispatcher setup, to ensure quality.
Consider this. You can use one production publish instance, but it's a single point of failure. It's a general best practice to load balance across at least two. Two is sufficient for most websites. If you do this, you'd want to mimic the architecture in CAT.
To answer question 2...
If your code is written, built, and deployed correctly, it should not delete your content. Just make sure you never deploy anything to /content (to avoid deleting content), or to /libs and most of /etc (to avoid overriding platform functionality). AEM/CQ5 is a very open product, so you can do very bad things, but if you know what not to do, you are safe.
Code deployments should typically be done as part of a CRX Content Package, which brings me to...
To answer question 3...
The way we build and deploy code is to have Maven compile the Java, package everything up in a CRX Package, then deploy to the instance using the Package Manager REST API. Adobe provides a Maven Archetype that will facilitate this.
A CRX Package is a file system representation of your content repository, wrapped in what is effectively an annotated Zip file. Your compiled Java code is included in that file system representation, in a folder (which becomes a node) typically named "install". That compiled Java is an OSGi bundle, which is an annotated JAR. When CRX Package Manager deploys all those nodes to the system, OSGi accepts the bundle, assuming it's valid. This is why you can do "hot" deployments to live, production AEM/CQ5 instances with very little risk.
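For illustration, deploying a package through the Package Manager HTTP API can be done with a single request like this (host, credentials, and package name are assumptions):

curl -u admin:admin -F file=@myapp-1.0.zip -F name=myapp \
     -F force=true -F install=true \
     http://localhost:4502/crx/packmgr/service.jsp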
So...
This is a very high level answer to some very big topics. I encourage you to do a lot more research before you set this up. There are many good blog posts and documentation pages out there to help you get this set up according to best practice. Good luck!

Better alternative to Web Deploy Projects

I have a solution with a fair few projects, 3 of them web-based (WCF in IIS / MVC site). When the solution builds, it dumps each of the components of this distributed system in a 'Build' folder. Running the 'configurator' part of the whole output will set up the system in the cloud automatically. It's very neat :) However, the Web Deploy Projects are a major pain. They "build" (i.e. deploy) every, single, time I build - even when no changes have been made to their respective projects.
Changed a single line of code? Look forward to waiting around a minute for the 3 web projects to redeploy.
[These projects are VERY straightforward at the moment - two have a single .svc and one .ashx file - the other is an MVC app with ~5 views]
I realise I can change solution configurations to not 'build' them, but I've been doing that and it's very easy to log on the next day and forget about it, and spend a couple of hours tracking down bugs in distributed systems due to something simply having not been built.
Why do I use Web Deploy Projects? Because I need all pages + binaries from the web project. The build output for the project itself is the 'bin' folder, so no pages. The entire project folder? It has .cs, .csproj and other files I don't want included.
This will be building on build servers eventually, but it's local at the moment. But I want a quick way of getting the actual output files from the web project to my target folder. Any ideas?
Not sure if this will help in your situation (plug for my own project coming up), but I am working on a project to help ease IIS deployments:
https://github.com/twistedtwig/AutomatedDeployments
The idea is that you can use config files for IIS (app pools, applications and websites) to automate the creation and update of sites, locally (dev machines) or remotely (test and production machines).
It is still a work in progress but is ready to be used in production systems.
Using package creation as a post-build step might get you closer to what you want (I don't believe it includes all the extra files), but that would still build it each time, although if the code hasn't changed it should not rebuild unless you choose to rebuild all projects.
In the end I created a utility/tool which, given a project file, XCOPYs the web project's folder to a target location, then looks in said project file and deletes anything that doesn't have its Build Action set to Content. Very quick and effective.
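A rough sketch of that idea, in Python for brevity (the argument paths are placeholders; the namespace URI is the standard MSBuild one, and the bin folder is deliberately kept for the binaries):

import shutil, sys
from pathlib import Path
import xml.etree.ElementTree as ET

proj = Path(sys.argv[1])     # e.g. MyWebApp/MyWebApp.csproj
target = Path(sys.argv[2])   # e.g. Build/MyWebApp
ns = {"m": "http://schemas.microsoft.com/developer/msbuild/2003"}

# Collect the files the project marks as Content (.svc, .ashx, views, ...)
tree = ET.parse(proj)
content = {e.attrib["Include"].replace("\\", "/")
           for e in tree.findall(".//m:Content", ns)}

# Copy the whole project folder, then prune everything that is not Content
shutil.copytree(proj.parent, target, dirs_exist_ok=True)
for f in target.rglob("*"):
    rel = f.relative_to(target).as_posix()
    if f.is_file() and rel not in content and not rel.startswith("bin/"):
        f.unlink()    # drops .cs, .csproj and other non-content files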
I know it is still in RC, but VS2012 does have a neat feature when publishing: it detects the changes and publishes only those. There might be something a little deeper down in the build where it does an automatic publish too.
You can take a look at the Octopus project: http://octopusdeploy.com/
Deployment is based on NuGet packages.

CCNET - build task required? Multiple repositories, one CCNET source section per project

CCNET questions - Here's the scenario:
I've got 10 developers doing local development against a Sitecore installation, with Git as version control. When done with a feature/fix, they push to an integration repository.
I've got CCNET set up for the Sitecore project, pointing to the remote integration repository and the local live QA code base. CCNET finds the commits that my developers have made to the integration repository and then updates the QA code base repository.
I also have a couple of other .NET class library projects managed by CCNET, compiled with their output pointed to the Sitecore bin dir.
The Sitecore installation is merely the result of a build with no compilable aspects. It's a web product with its own API, as well as the ability to integrate custom DLLs that we create to customize the product.
Questions:
Is the CCNET build task required as a condition for executing other activities such as NUnit or robocopy? (I ask because a "build" is natively used to compile an app and generate output, whereas the only reason we'd want to build is to make sure all dependencies are there so we can jump straight to unit testing.)
If my developers are NOT pushing to a centralized repository like integration, how would CCNET know where all of their remote Git repositories are, when the config doc only allows one Git source control section per project?
When I configure the Git VC specs per project, it asks for the branch, which is statically saved to the doc. Does CCNET have the ability to accept different branches dynamically?
There's no need to have an "actual build" in your project: it can consist of any type of tasks inside the tasks element. I have a couple of projects which only copy the files from the repository to an FTP server, after deleting some files which shouldn't be published.
I have no experience with Git, but you can define multiple source control blocks of any type if you use the multi source control block.
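As a sketch, one project can watch several developers' repositories like this (repository URLs and working directories are assumptions):

<sourcecontrol type="multi">
    <sourceControls>
        <git>
            <repository>git://example.com/dev-alice.git</repository>
            <branch>integration</branch>
            <workingDirectory>C:\ccnet\alice</workingDirectory>
        </git>
        <git>
            <repository>git://example.com/dev-bob.git</repository>
            <branch>integration</branch>
            <workingDirectory>C:\ccnet\bob</workingDirectory>
        </git>
    </sourceControls>
</sourcecontrol>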
You could use dynamic parameters, which allow the user to set their values when triggering the build.
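For the branch question, the idea would look roughly like this: declare a parameter, then reference it where the branch is normally hard-coded (the parameter name and default are assumptions):

<parameters>
    <textParameter>
        <name>branch</name>
        <display>Branch to build</display>
        <defaultValue>master</defaultValue>
    </textParameter>
</parameters>

<git>
    <repository>git://example.com/project.git</repository>
    <branch>$[branch|master]</branch>
</git>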

Using Maven to setup a Drupal PHP project

What do I want to achieve?
We are currently working on a PHP project that uses Drupal.
I desperately want to learn how to create a One-step build for the whole project.
Preferably by using something new (for me) that seems very powerful: Maven
Basically I want to automate the following process:
Checkout Drupal from the official CVS repository.
Checkout official 3rd party modules from their respective CVS repositories.
Checkout our custom modules from our mercurial repository.
Copy/move all the modules to the appropriate directory within Drupal.
Checkout and install our custom theme.
Add a custom drupal installation profile.
Create a new MySQL database schema.
If possible, automate the drupal db connection setup.
In the future I would like to run this build on a Hudson (or any other) continuous integration server.
Why Maven? (why not Ant or Phing?)
Other than the desire to learn something new (I have used Ant before), I think the dependency management of Maven might work well for the Drupal modules.
Do you think this is enough reason to use Maven, even though Maven was not originally intended for PHP projects? I know Ant was not originally used for PHP either, but there are far more examples of people using Ant and PHP together.
BTW I think I will switch to Ant if I can't get Maven to work soon. The procedural style of Ant is just easier for me to understand.
What do I have so far?
I have a pom.xml file that uses the SCM plugin to check out the Drupal source.
When I run:
mvn scm:checkout
the source is checked out into a new directory:
target/checkout
When I try:
mvn scm:bootstrap
it complains about the install goal not being defined.
Here is the pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>drupal</artifactId>
    <version>1.0</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-scm-plugin</artifactId>
                <version>1.1</version>
                <configuration>
                    <username>anonymous</username>
                    <password>anonymous</password>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <scm>
        <connection>scm:cvs:pserver:cvs.drupal.org:/cvs/drupal:drupal</connection>
        <developerConnection>scm:cvs:pserver:cvs.drupal.org:/cvs/drupal:drupal</developerConnection>
        <tag>DRUPAL-6-12</tag>
        <url>http://cvs.drupal.org/viewvc.py/drupal/drupal/?pathrev=DRUPAL-6</url>
    </scm>
</project>
Finally, what are my questions?
Is Maven the wrong tool for this?
If no,
How would you do this?
Is it the scm:bootstrap goal that I should be using?
What is the Maven way of moving directories around on the file system?
Should the install goal be used to move the modules into the drupal directory?
Currently all our custom modules are in one Mercurial repository. Is it possible to create a pom.xml and check out each module individually?
Any general advice would be appreciated.
Thanks for your time!
I'm 98% certain that what you really need is Drush Make, which can recursively build Drupal projects, provided each project supplies its own .make file listing its dependencies. It can download from multiple SCMs, the web, and patch files, and you can control where things get downloaded to. It also supports external libraries, such as WYSIWYG editors, PHP files, or JS libraries.
See the Open Atrium make file for a sample of what it can do.
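For a feel of the format, a .make file along these lines rebuilds a full site (the core and module versions are illustrative, and the custom repository URL is a placeholder):

core = 6.x
api = 2

projects[drupal][version] = 6.12

; Contributed modules from drupal.org
projects[views][version] = 2.8
projects[cck][version] = 2.6

; A custom module pulled straight from version control
projects[mymodule][type] = module
projects[mymodule][download][type] = git
projects[mymodule][download][url] = git://example.com/mymodule.git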
You definitely shouldn't be using Maven for this; here are some thoughts:
Maven is a Java build tool and dependency management system with a well-defined lifecycle which goes like this: validate, compile, test, package, integration-test, verify, install, deploy. What you are using is the SCM plugin, which can be bound to any of the phases defined here and perform some actions, but unless you make complicated changes in the POM (I haven't heard of anyone doing this) the rest of the lifecycle will still be executed.
Maven is also designed to package JARs, WARs and, with the help of some plugins, EARs, SARs, RARs (not those RARs) and some other formats; you might have to write a new plugin to produce the type of package you expect, or use the assembly plugin, which will make things more complicated.
Because of the previous points, there is no native Maven command to move files into a specific directory, and you shouldn't invoke the install phase to copy files to any location other than the local repository. What you're doing is like taking a washing machine and converting it into a blender.
After reading what you want to do with your project, I'd suggest you create a script (shell script or batch script, depending on your OS) to do the job. SVN and CVS have command-line tools which can be invoked from inside your build scripts. I guess you opted for Maven, among other reasons, because Hudson and many other continuous integration servers integrate well with it, but they can run plain scripts too.
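A sketch of such a script for the steps in the question (the CVS tags, contrib paths, and Mercurial URL are from memory or placeholders; check them before use):

#!/bin/sh
set -e

# Check out Drupal core at a fixed tag
cvs -z6 -d:pserver:anonymous:anonymous@cvs.drupal.org:/cvs/drupal \
    checkout -r DRUPAL-6-12 drupal

# Check out a contributed module and move it into place
cvs -z6 -d:pserver:anonymous:anonymous@cvs.drupal.org:/cvs/drupal-contrib \
    checkout -r DRUPAL-6--2-8 -d views contributions/modules/views
mv views drupal/sites/all/modules/

# Check out the custom modules from Mercurial
hg clone https://hg.example.com/custom-modules drupal/sites/all/modules/custom

# Create the database and seed settings.php
mysql -u root -e "CREATE DATABASE IF NOT EXISTS drupal"
cp drupal/sites/default/default.settings.php drupal/sites/default/settings.php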
If you are comfortable using Ant and you think it will ease building your app, I don't think that's a bad choice ;) (I haven't used Ant for purposes other than Java projects.)
The Drush 'module' is a great tool for scripting things out in Drupal. But beyond that, I think your approach of doing CVS checkouts for each 'build' is a little off base. Unless you have -really- good reasons to keep every chunk of the project in a separate repository, your best bet is to commit fixed checkouts of Drupal core and the contributed modules to your project's repository. Not only does this remove a dependency on a network connection and on the stability of an external server, but it also allows you to keep local modifications of the contributed modules (unfortunately, you're probably going to end up doing this somewhere down the line).
Once you take out the requirement to do checkouts from multiple repositories, you'll probably notice that your task becomes -much- easier, leaving you with some simple MySQL manipulation and writing out a settings.php.
The project at http://www.php-maven.org now comes with a build plugin that brings PHP into the Maven world (or the Maven world to PHP projects). A version 2 snapshot can be found in our Google group (news thread available at https://groups.google.com/group/maven-for-php/t/e055e49c89ccb8c5?hl=de).
This gives you full control over the project and respects the default Maven lifecycle, so that the Maven commands:
mvn clean
mvn package
mvn deploy
mvn site
will work correctly.
Drupal support may arrive in version 2.1, where we are focusing on frameworks (Zend, FLOW3...) and project types (web, CLI, libs...). It would be too much to clarify here what Maven is and how it can help you during PHP development. As Victor Hugo stated in his earlier comment, Maven's benefit is not only being able to execute a specific command manually, but having the whole project structure and the whole project lifecycle embedded via Maven.
Since most PHP developers have not yet had contact with Java, and especially with Maven, we are creating tutorials so that everyone has a fairly simple entry into the Maven world.
I love Maven, although I think it is very Java-specific, as mentioned above.
I had success handling repeatable tasks with Phing. I used it in a Zend project to prepare builds and to speed up the normal repeatable tasks (e.g. cleaning up the DB, loading a DB dump).
Phing won't provide you with complete lifecycle management like Maven, but you can write that yourself by hand. You can embed shell commands in build.xml, so you can use everything you would use in a normal shell script.
I prefer Phing over a plain shell script because it can handle dependent targets: if your build.xml contains well-designed targets that depend on each other, you get very useful chains for achieving specified goals.
It works for me.
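A minimal sketch of such a build.xml with dependent targets (the target names, database, and SQL scripts are assumptions):

<project name="myapp" default="build">
    <target name="clean-db">
        <exec command="mysql -u root myapp &lt; scripts/drop_tables.sql" checkreturn="true"/>
    </target>
    <target name="load-dump" depends="clean-db">
        <exec command="mysql -u root myapp &lt; scripts/dump.sql" checkreturn="true"/>
    </target>
    <target name="build" depends="load-dump">
        <echo message="Database rebuilt; ready to package the app"/>
    </target>
</project>

Running phing build then executes clean-db, load-dump, and build in dependency order.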
Another great tool for Drupal is drush, which makes Drupal administration scriptable. You can do lots of Drupal-specific things from the console. I think you can call drush commands from Phing scripts.
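For example, typical drush calls that would slot into such a script (the module name is a placeholder):

drush cache-clear all         # flush all Drupal caches
drush updatedb -y             # run pending database updates non-interactively
drush pm-enable mymodule -y   # enable a module without prompting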
