I am not a CQ guy, but I have to use CQ5 for one of my projects. I have a CAT and a production environment. I have the following doubts:
I want to use the author instance of my CAT environment only. Once I publish content in CAT, it should be published in production as well. Is that possible?
When I update the build of Adobe CQ in production (a new build, code changes, etc.), will my content be lost?
I read somewhere about content packages in CQ5. Can I separate content changes from code changes in one CQ5 environment?
Thanks in advance.
To answer question 1...
This is not a recommended setup, but it's a common misconception for someone unfamiliar with AEM/CQ5. The "author" and "publish" instances should be part of the same environment. For example, you should have a production author instance, probably behind your firewall, and a production publish instance to serve pages to the public.
Your CAT environment should have the same thing. You want your testing environment to match your production environment as closely as possible, including the web server and dispatcher setup, to ensure quality.
Consider this. You can use one production publish instance, but it's a single point of failure. It's a general best practice to load balance across at least two. Two is sufficient for most websites. If you do this, you'd want to mimic the architecture in CAT.
To answer question 2...
If your code is written, built, and deployed correctly, it should not delete your content. Just make sure you never deploy anything to /content (to avoid deleting content), and stay out of /libs and most of /etc to avoid overriding platform functionality. AEM/CQ5 is a very open product, so you can do very bad things, but if you know what not to do, you are safe.
Code deployments should typically be done as part of a CRX Content Package, which brings me to...
To answer question 3...
The way we build and deploy code is to have Maven compile the Java, package everything up in a CRX Package, then deploy to the instance using the Package Manager REST API. Adobe provides a Maven Archetype that will facilitate this.
A CRX Package is a file system representation of your content repository, wrapped in what is effectively an annotated Zip file. Your compiled Java code is included in that file system representation, in a folder (which becomes a node) named "install". That compiled Java is an OSGi bundle, which is an annotated JAR. When CRX Package Manager deploys all those nodes to the system, the OSGi container picks up and installs the bundle, assuming it's valid. This is why you can do "hot" deployments to live, production AEM/CQ5 instances with very little risk.
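To make that concrete, here is a minimal sketch of the "deploy via the Package Manager REST API" step in Python. The host, credentials, and package path are placeholders, and in practice the Maven archetype normally wires this step up for you (typically via Adobe's content-package-maven-plugin), so treat a hand-rolled script like this as illustrative only:

    import requests

    # Placeholder values -- adjust the host, credentials, and package path for your instance.
    HOST = "http://localhost:4502"
    AUTH = ("admin", "admin")
    PACKAGE = "target/myproject-content-1.0.zip"  # the CRX package built by Maven

    # Upload and install the package through the CRX Package Manager service.
    # force=true re-uploads a package that already exists; install=true installs it immediately.
    with open(PACKAGE, "rb") as pkg:
        response = requests.post(
            HOST + "/crx/packmgr/service.jsp",
            auth=AUTH,
            files={"file": pkg},
            data={"name": "myproject-content", "force": "true", "install": "true"},
        )

    response.raise_for_status()
    print(response.text)  # the service responds with a status message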
So...
This is a very high level answer to some very big topics. I encourage you to do a lot more research before you set this up. There are many good blog posts and documentation pages out there to help you get this set up according to best practice. Good luck!
When I was learning how to use premake, I remember reading a wiki page or perhaps a forum post somewhere (I wish I could find the original link) suggesting that project files generated by your premake scripts may ultimately be run on different machines than the one you're running premake on. So, I took this idea and designed premake scripts accordingly to replace the existing autotools/VS/Xcode project files in an open-source project I contribute to. This project uses a variety of third-party libraries, some mandatory and some optional.
What I started to discover, through both my own experience and feedback from other developers, is that it's pretty tough to generate generic project files (gmake files, especially) that will work on other machines, especially when it comes to finding the location of system libraries to link against. It also seems like you completely give up the ability to auto-detect the state of the build machine and enable/disable optional build settings accordingly; and instead of errors you could have displayed during configuration in a user-friendly format (missing dependencies, etc.), you have to rely on cryptic compiler errors to tell users that they're missing something.
My question is for those who have experience using premake in a production environment: is it a reasonable goal to be able to transfer premake-generated project files to other machines and still have them work, or should you design your premake scripts around the assumption that users will run premake locally, because build environments are so diverse?
For simple or self-contained projects, certainly—the official Premake releases ship with pre-built project files, for example. But for more complex projects it generally makes more sense to just ship the Premake scripts (i.e. premake5.lua) and ask developers to download and run Premake locally to generate the final project files, for the reasons you specified.
I have almost finished developing a project built with Symfony2 and wish to put it online.
However, I suppose there are a lot of things that need to be done so that everything works properly: the dev mode needs to be disabled, etc. What needs to be done, and how?
What are the most important things to do on a Symfony2 project that will be available to everyone on the web?
I suggest you use Capifony for deployment. It does a lot of stuff out of the box, and you can make it run any custom commands you need. See its documentation for details.
Regarding the dev mode, unless you've removed the IP checks from app_dev.php, you don't have to worry about deploying it. Of course, if you wish, you can tell Capifony to delete it on deployment.
The best way to handle deployment is to create a "build" script (a rough sketch follows the list below), which will:
Remove all folders and files with tests from your bundles and vendors.
Remove the app_dev.php file
Make sure that app/cache and app/logs are fully writable/readable.
Pack your project into an archive (an RPM, for example)
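As a rough illustration only (the paths and archive name are hypothetical, and your project layout may differ), a build script along those lines might look like this in Python:

    import os
    import shutil
    import subprocess
    import tarfile

    PROJECT = "/path/to/project"   # hypothetical checkout of the tagged release
    ARCHIVE = "myapp-1.0.tar.gz"   # hypothetical archive name

    # 1. Remove test folders from bundles and vendors.
    for root, dirs, files in os.walk(PROJECT):
        for d in list(dirs):
            if d in ("Tests", "tests"):
                shutil.rmtree(os.path.join(root, d))
                dirs.remove(d)

    # 2. Remove the dev front controller.
    app_dev = os.path.join(PROJECT, "web", "app_dev.php")
    if os.path.exists(app_dev):
        os.remove(app_dev)

    # 3. Make sure app/cache and app/logs are fully readable/writable.
    for d in ("app/cache", "app/logs"):
        subprocess.check_call(["chmod", "-R", "a+rw", os.path.join(PROJECT, d)])

    # 4. Pack the project into an archive.
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(PROJECT, arcname="myapp")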
Then, before deployment, you should create a tag in your project, marking that a certain version of your application has been released (I recommend following this Git branching model).
Create tag.
Run your build script
Upload archive to host
Unpack
Enjoy your project
I'm currently researching the same thing.
The first thing you have to consider is how "professional" you want your deployment to be. There are a lot of tools you can use:
Continuous integration servers (e.g. Hudson, Jenkins)
Build tools (e.g. Phing, Capistrano/Capifony, shell scripts)
Version control tools (e.g. Git, SVN)
I think the simplest setup is using only a build tool, and I guess you are already using some kind of version control.
Depending on which tool you use, the setup is different, but I think there are some steps you should consider (maybe not all are applicable to your application); a rough sketch of these steps as a script follows the list:
Create a tag in your version control
Copy the new code into a folder on production
(if you deploy into a new folder, you don't need to clear the cache and logs, since these shouldn't be in version control in the first place)
Load Composer (if you're using it)
Install vendors
Update the database schema
Install assets from your bundles
Move the symlink from the current version to the folder of the new release
These are the steps I currently need for production deployment of my application; if you deploy to a test environment, you should load fixtures and run your test scripts as well.
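To make those steps concrete, here is a rough Python sketch; the release folder layout, the current symlink, and the use of app/console are assumptions based on a typical Symfony2 setup, so treat it as an outline rather than a finished deployment tool:

    import os
    import subprocess

    RELEASE = "/var/www/myapp/releases/20130501"  # hypothetical new release folder
    CURRENT = "/var/www/myapp/current"            # symlink the web server points at

    def run(*cmd):
        subprocess.check_call(list(cmd), cwd=RELEASE)

    # Load Composer and install vendors.
    run("curl", "-sS", "-o", "composer.phar", "https://getcomposer.org/composer.phar")
    run("php", "composer.phar", "install", "--no-dev", "--optimize-autoloader")

    # Update the database schema and install bundle assets.
    run("php", "app/console", "doctrine:schema:update", "--force", "--env=prod")
    run("php", "app/console", "assets:install", "web", "--env=prod")

    # Switch the symlink to the new release last, so the cut-over is a single step.
    if os.path.islink(CURRENT):
        os.remove(CURRENT)
    os.symlink(RELEASE, CURRENT)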
One other option that is very well described here is to deploy the Symfony2 application with Apache Ant. Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other.
Here's our problem: we are a Flex shop that uses .NET for the server-side logic. We use Subversion for our source control and Subclipse in Flex Builder, but we are still quite new to using source control, let alone Subversion. Branching and merging seem to work very well on the .NET side, but we are running into issues on the Flex side because the final SWF is built on our local machines.
The question is, what does a usual workflow look like for working with Flex and SVN? Particularly, how do you branch and where do you build?
Personally, I keep the Flash/Flex source code in a separate SVN repository, away from what is deployed to any sort of web server. That way I can create branches and tags specifically for my Flash/Flex application. I also tend to publish any SWFs directly into my local copy of the deployment repository. It does not make sense to me to keep a published SWF under version control unless it's part of what is deployed to the server. I don't like committing an SWF into my Flash source code repository because it takes up unnecessary space, and the repository should represent the latest source code, not the resulting SWF.
You'd probably want to branch your project alongside your .NET project so your Flex releases are consistent with your server logic.
We use a directory structure like this:
+server-side-app
--trunk
--tags
--branches
+flex-client-app
--trunk
--tags
--branches
I would recommend something like that for yourself.
I agree with Matt W. At AKQA we have SVN locations for our source and assets. We set up an svn:ignore for the bin folders of a project. That way we aren't checking in any SWFs, which means when we update we don't get someone else's SWFs or output files.
A good bet is to look into continuous integration with something like CruiseControl. We build our output on the server, which generates all of the files into one location on the server. There are loads of other benefits of continuous integration, and it's well worth having.
What are the strategies for versioning of a web application/ website?
I notice that here in the beta there is an SVN revision number in the footer, and that's ideal for an application that uses SVN over one repository. But what if you use externals or a different source control application that versions separate files?
It seems easy for a desktop app, but I can't seem to find a suitable way of versioning for an ASP.NET web application.
NB I'm not sure that I have been totally clear with my question.
What I want to know is how to build and auto-increment a version number for an ASP.NET application.
I'm not interested in how to link it with svn.
I think what you are looking for is something like this: How to auto-increment assembly version using a custom MSBuild task. It's a little old but I think it will work.
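If you'd rather not pull in a custom MSBuild task, a small pre-build script can do the same job. The sketch below is not the linked article's approach; it's a Python alternative that assumes a conventional Properties/AssemblyInfo.cs and bumps the build number of the AssemblyVersion attribute each time it runs:

    import re

    ASSEMBLY_INFO = "Properties/AssemblyInfo.cs"  # hypothetical path; adjust for your project

    def bump_build_number(match):
        major, minor, build, revision = (int(n) for n in match.group(1).split("."))
        return '[assembly: AssemblyVersion("{0}.{1}.{2}.{3}")]'.format(
            major, minor, build + 1, revision)

    with open(ASSEMBLY_INFO) as f:
        source = f.read()

    # Match e.g. [assembly: AssemblyVersion("1.0.42.0")] and increment the third part.
    source = re.sub(
        r'\[assembly: AssemblyVersion\("(\d+\.\d+\.\d+\.\d+)"\)\]',
        bump_build_number,
        source,
    )

    with open(ASSEMBLY_INFO, "w") as f:
        f.write(source)

You would hook this in as a pre-build step so the number increments on every build.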
For my big apps I just use an incrementing version number (1.0, 1.1, ...) that I store in a comment in the main file (usually index.php).
For plain websites I usually just have a revision number (1, 2, 3, ...).
I have a tendency to stick with basic integers at first (1, 2, 3), moving on to rational numbers (2.1, 3.13) when things get bigger...
I tried using fruit at one point; that works well for a small office. Oh, the 'banana' release? (looks over in the corner) "Yeah... that's getting pretty old now..."
Unfortunately, confusion started to set in when the development team grew: is it an Orange, a Mandarin, or a Tangelo? It looks OK. What do you mean, "rotten on the inside"?
...but in all honesty: set up a separate repository as a master, and let development go on in various repositories. For every scheduled release, everything is checked into the master repository so that you can quickly roll back when something goes wrong.
(I'm assuming dev/test/production are all separate servers, and dev is never allowed to touch production or the master repository....)
I maintain a system of web applications with various components that live in separate SVN repos. To be able to version track the system as a whole, I have another SVN repo which contains all other repos as external references. It also contains install / setup script(s) to deploy the whole thing. With that setup, the SVN revision number of the "metarepository" could possibly be used for versioning the complete system.
In another case, I include the SVN revision via SVN keywords in a class file that serves no other purpose (to avoid the risk of keyword substitution breaking my code). The class in that file contains a string variable that is manipulated by SVN and parsed by a class method.
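For that second approach, the idea is simply a file whose only job is to carry the expanded keyword. Here is a minimal sketch (in Python for illustration; the module name is made up, and the file needs svn propset svn:keywords "Revision" applied to it for the substitution to happen):

    # revision_info.py -- exists only so SVN keyword substitution can stamp a revision
    # number into it, without risking substitution inside real code.

    class RevisionInfo:
        # Expands to e.g. "$Revision: 1234 $" once svn:keywords includes "Revision".
        _SVN_REVISION = "$Revision$"

        @classmethod
        def revision(cls):
            """Parse the revision number out of the keyword-expanded string."""
            parts = cls._SVN_REVISION.strip("$").split(":")
            if len(parts) == 2 and parts[1].strip().isdigit():
                return int(parts[1].strip())
            return None  # keyword not expanded yet (e.g. a plain export)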
An inconvenience with both approaches is that the revision number is not automatically updated by changes in the externals (approach 1) or the rest of the code (approach 2).
During internal development, I'm using milestone numbers (M1, M2, M3...). After release, I'll probably just update dates ("the January 2009 update").
As part of improvements to our build process, we are currently debating whether we should have separate project/solution files on our CI production environment from our local development environments.
The reason this has come about is because of reference problems we experienced in our previous project. On a frequent basis people would mistakenly add a reference to an assembly in the wrong location, which would mean it would work okay on their local environment, but might break on someone else's or on the build machine.
Also, the reference paths are in the csproj.user files which means these must be committed to source control, so everyone has to share these same settings.
So we are thinking about having separate projects and solutions on our CI server, so that when we do a build it uses these projects rather than local development ones.
It has obvious drawbacks such as an overhead to maintaining these separate files and the associated process that would need to be defined and followed, but it has benefits in that we would be in more control over EXACTLY what happens in the production environment.
What I haven't been able to find is anything on this subject - can't believe we are the only people to think about this - so all thoughts are welcome.
I know it's anachronistic, but the single best way I've found to handle the references issue is to have a folder mapped to a drive letter such as R:, and then all projects build into or copy their output into that folder as well. Then all references are R:\SomeFile.dll, etc. This gets you around the problem that sometimes references are added by absolute path and sometimes they are added relatively (there's something to do with "HintPath" that I can't really remember).
The nice thing then is that you can still use the same solution files on your build server, which, to be honest, is an absolute must; otherwise you lose the certainty that what is being built on the dev machine is the same as what is built on the build server.
In our largest project (a system comprising many applications) we have the following structure:
/3rdPartyAssemblies
/App1
/App2
/App3
/.....
All external assemblies are added to 3rdPartyAssemblies/Vendor/Version/...
We have a CoreBuild.sln file which acts as an MSBuild script for all of the shared assemblies, to ensure building in dependency order (i.e., making sure App1.Interfaces is built before App2, as App2 has a reference to App1.Interfaces).
All inter-application references target the /bin folder (we don't use bin/debug and bin/release, just bin, this way the references remain the same and we just change the release configuration depending on the build target).
CruiseControl builds the core solution for any dependencies before building any other app, and because the 3rdPartyAssemblies folder is present on the server, we ensure developer machines and the build server have the same development layout.
Usually, you would be creating build projects/scripts in some form or another for production anyway, so putting together another solution file shouldn't come into the picture.
It would be easier to train everyone to use project references, and create a directory under the project file structure for external assembly references. This way everyone follows the same environment.
We have changed our project structure (making use of SVN externals) so that each project is now completely self-contained. That is, references never point outwith the project directory (for example, if Project A references ASM X, then ASM X exists within a subfolder of Project A).
I suspect that this should go some way towards helping solve some of our problems, but I can still see some advantages of having more control over the build projects.
#David - believe it or not this is what we actually have just now, and yet it's still causing us problems!
We're making some changes though, which are forced upon us due to moving to TeamCity and multiple build agents - so we can't have references to directories outwith the current project, as I've mentioned in my previous answer.
Look at the Externals section of this link to see what I mean - http://www.dummzeuch.de/delphi/subversion/english.html
I would strongly recommend against this.
Reference paths aren't only stored in the .user file. A hint path is stored in the project file itself. You should never have to check a .user file into source control.
Let there be one set of (okay, possibly versioned) solution/project files which all developers use, and whose Release configurations are what you ultimately build in production. Having separate project files is going to cause confusion down the road, when some project setting is tweaked, not carried across, and slips into production.
You might also check this out:
http://www.objectsharp.com/cs/blogs/barry/archive/2004/10/29/988.aspx
http://bytes.com/forum/thread268546.html