Synchronizing Plone 4 sites

I'm using Plone 4 for my sites and I was wondering if there is a way to synchronize two Plone sites, i.e. be able to synchronize my development site with my production site.
I have looked at the ZSyncer product and it appears it is no longer maintained; besides, the last version is not compatible with Plone 4.
I am thinking of writing a custom script that will handle exporting the Data.fs file and the src files, as explained in these two articles:
Copying a remote site database
Copying a Plone site
Is there a better way of synchronizing two plone sites as described by my use case above?

For keeping the code synchronized, you want collective.hostout
For the database, use collective.recipe.backup - you could probably also use hostout to import the backups
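A minimal sketch of the backup part in buildout might look like this (the part name is arbitrary; check the collective.recipe.backup documentation for the options your version supports):
    [buildout]
    parts +=
        backup
    [backup]
    recipe = collective.recipe.backup
After re-running buildout you get repozo-based bin/backup and bin/restore scripts on the production box; copy the backup directory to the development machine (e.g. with rsync or hostout) and run bin/restore there.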

Not sure if this solution will fit all your needs, but I use DemoStorage, which has been built into ZODB since version 3.9 (Plone 4 uses it).
You set up DemoStorage on the development instance and point it at the Data.fs from production. All changes are stored in memory or in a separate file (depending on how you configure it), so changes made in dev will not be visible on production. If you have both instances on the same server you can use the production Data.fs directly (without copying it), so it will always be synchronized.
To configure it you have to modify your buildout. See: https://pypi.python.org/pypi/plone.recipe.zope2instance#advanced-options
When transactions on prod and on dev change the same objects (it happens occasionally), DemoStorage can raise errors. Then you just have to restart the dev instance (if you keep the changes in memory) or delete the changes file and then restart.
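As a rough sketch, the storage-related lines of the [instance] part might look like this, assuming the demo-storage support described under the advanced options linked above (treat the exact option names and the paths as assumptions to verify against that documentation):
    [instance]
    recipe = plone.recipe.zope2instance
    # Use the production Data.fs as the read-only base layer.
    file-storage = /path/to/production/var/filestorage/Data.fs
    # Wrap it in a DemoStorage so all writes stay local to dev (in memory
    # by default; the recipe documentation describes file-backed changes).
    demo-storage = on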

Related

How to migrate repository data from Alfresco 4 to 5?

I'm working on a migration from Alfresco 4 to 5, and installing add-ons on Alfresco 4 for this purpose is not an option. The databases used by the two versions are different from each other. I have tried ACP files and it is very time consuming. Is there a size limitation on ACP files? What other methods can be used?
Use Standard Upgrade Procedure
What is your main intention? "Just" doing an upgrade from 4 to 5?
In that case the robust, easy way would be to:
Install the required modules containing custom models in your target system (or, if you customized models in the extension path, copy that config over)
Back up and restore the Alfresco repo database to your new (5.x) system. If your target system uses a different db product (not just a different version) you need to manage the db migration using db-specific migration tools; Alfresco export/import is not an alternative here.
Sync alf_data/contentstore to your new system (make sure the db dump is always older than the contentstore, or do an offline sync); a rough sketch of the database and contentstore steps follows this list.
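For example, with a PostgreSQL-backed repository those two steps might look roughly like this (host names, paths and the choice of PostgreSQL are assumptions; use the tools for your own database product):
    # On the old (4.x) server: dump the repository database first...
    pg_dump -U alfresco -Fc alfresco > alfresco-repo.dump
    # ...then sync the contentstore, so the dump is never newer than the content
    rsync -av /opt/alfresco/alf_data/contentstore/ newhost:/opt/alfresco/alf_data/contentstore/
    # On the new (5.x) server: restore into the empty repository database
    pg_restore -U alfresco -d alfresco alfresco-repo.dump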
During startup Alfresco recognizes that the repo needs to be upgraded and does everything. Check the catalina.out for any output during migration.
If you only need a subset of your previous system, it is much easier to delete the unwanted content afterwards (don't forget to purge the trash, and configure the cleaner job not to wait 14 days).
Some words concerning ACP
It is a nice tool for exporting single directories, but unfortunately it is limited:
no support across Alfresco versions (exactly your case)
no support for site metadata / no site export/import (it may work after the 4.x changes that put site metadata into nodes, but I suppose nobody has tested this)
must run in one transaction, so the hard limits depend on your hardware/JVM configuration, but I wouldn't recommend exporting/importing more than a few thousand nodes at once
If you really need to export/import a huge number of documents you should run the import/export in a separate Java process, which means your Alfresco needs to be shut down. See https://wiki.alfresco.com/wiki/Export_and_Import#Export_Tool
ACP does have a file limit (I can't remember the actual number), but we've had problems with files below that limit too. We've given up on this approach in favor of the Alfresco bulk import tool.
One big advantage of this tool is that it can continue a failed import from the point of failure; there's no need to delete the partially imported batch and start all over again. It can also update files as needed, something the ACP method can't do (it would fail with DuplicateChildNameNotAllowed).

How to upgrade (merge) web.config with web deploy (msdeploy)?

I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high-level overview of the chain:
The web developer creates the code and checks it into SVN;
The build server sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their webserver(-s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some more critical parameters at the first-time installation (achieved by msdeploy's parameter mechanism), but they can do others by hand.
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and then superimpose the customer's customisations on top of your standard setup. If it's a brand new install then there will be nothing to superimpose.
top-level
|-- standard files            (never touched or changed by the customer)
|   |-- images
|   `-- settings.txt
`-- customer files            (the customer hacks this to their heart's content)
    |-- images
    `-- settings.txt_override
Yes, this does mean that some kind of merging process needs to happen, and there needs to be some script that does it (a sketch of the superimpose step follows the list of advantages below), but this approach has several advantages:
For settings that suddenly become redundant just issue a warning to that effect
If a customer has their own logo provide the ability to specify this in the override file
The message is clear to customers. Stay off standard files.
If customers request more customisable settings, write the default into the override file during upgrades if it does not already exist.
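A minimal sketch of that superimpose step, assuming a Windows batch script and the folder layout above (paths and file names are illustrative, not your actual package layout):
    @echo off
    rem ApplyCustomisations.bat - run after the standard package is deployed.
    rem Lay the customer's files (e.g. their own logo) over the standard ones.
    robocopy "C:\inetpub\MyApp\customer files\images" "C:\inetpub\MyApp\standard files\images" /E
    if errorlevel 8 (echo File copy failed & exit /b 1)
    rem A real script would also merge settings.txt_override into settings.txt
    rem and warn about overridden settings that have become redundant.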
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt", and if it exists the upgrade bat script will run. The version number is contained within the text file. A bit basic, but it should work.
This also has the added advantage of letting you merge the customer's custom settings between versions more elegantly, as you know which properties could be overridden for that particular version.
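A sketch of such a check, wired in through the -preSync/-postSync hooks shown above (the site path and version file layout are assumptions):
    @echo off
    rem UpgradeScript.bat - only runs the upgrade-specific steps when an
    rem existing installation is detected via version.txt.
    set SITEROOT=C:\inetpub\MyApp
    if not exist "%SITEROOT%\version.txt" goto freshinstall
    set /p INSTALLED_VERSION=<"%SITEROOT%\version.txt"
    echo Upgrading from version %INSTALLED_VERSION% - running upgrade steps
    rem ... version-specific config merges, cache cleanup, etc. go here ...
    goto :eof
    :freshinstall
    echo Fresh install detected - skipping upgrade steps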
Here are some general suggestions (not specific to msdeploy), but I hope they help:
I think you'll need to provide several installers anyway: for the initial setup and for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this links will give you some pointers.
As for merging the replaced logos and other client customizations, I think the best approach would be to support branding your application, i.e. move all branding details to a place that your new/upgrade installers won't touch.
As for the rest of the adjustments made by your clients, they make those at their own risk, so the only help you can provide is a detailed list of changes (maybe even the list of changed files since the previous version) and a how-to article about merging the sources with tools like Araxis Merge or similar.
Or you could create a utility, included in the installer, which tries to do all the tricky merging on the client's machine. I would not recommend this, as it requires a lot of effort/resources to maintain.
One more thing: you could focus on backing up the previous client copy before the upgrade, so even if a client has trouble upgrading it will always be possible to roll back. The only other thing is to provide a good feedback channel your clients can use to report their troubles. That feedback will allow you to figure out what problems your clients have and how to make the upgrade process more comfortable.
I would build on what the above have said, but I would do it with transformations, and strict documentation about who configures what. The way you have it now relies on customer intervention against a config that is mission critical to the app deploy process.
Create three config file areas. One for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory. This is the transform that takes the production generic template and creates a web.config for your development server (it sounds like you are shooting for a CI-type process here).
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
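For the web.config specifically, these layers can be expressed as XDT transform files applied at package/deploy time; a small illustrative transform (the key and connection string names are made up):
    <?xml version="1.0"?>
    <!-- Web.Customer.config: overrides applied on top of the generic web.config -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <!-- Replace only the value of an existing setting, leave the rest alone -->
        <add key="LogoPath" value="images/customer-logo.png"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
      <connectionStrings>
        <add name="Main"
             connectionString="Server=DBSERVER;Database=MyApp;Integrated Security=true"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>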
Is that what you were looking for? Thoughts?

How should I set up a Development to Production Plone workflow?

I worked with Plone 2 several years ago, but its workflow and security systems (not to mention the new theming engine) seem the best fit for a current project. Plone 4 looks much better from a usability standpoint, but in its drive not to force a particular setup on users, it has become a bit of a maze as to how best to host it on our public-facing Ubuntu servers. I’ve got Plone installed using the Unified Installer (in /home/plone/Plone/zinstance), and have been running it in the foreground for development, but I’m now going to need to set things up for the move to making this website live.
I want to achieve a development instance on the project server that we can use for our own customisations (with additional local instances on our desktops for testing before checking into the development instance), then a site that our client can test things on before they go live, and finally the live website.
In particular:
I’m assuming that the live site will be based on ZEO, while testing and development will be standalone; does that make sense?
At present, everything lives in /home/plone/Plone, while all our other (generally PHP CMS) sites are hosted from /srv/<domain>/www; everything for the domain is then backed up in one go. What’s the best way to lay out the different Plone instances to fit into this?
What’s the best way to move our changes to buildouts and products through the cycle from development to testing and then to a live site? We currently have a basic deployment process using Mercurial for other systems.
Current best practice is to use buildout to define a fully reproducible deployment.
Generally, you use a production.cfg and a development.cfg configuration that share configuration via inclusion, to tweak the deployment to the specific needs of either environment; you can expand on this model as needed. A big project might have a staging.cfg to deploy new features to a test environment first, for example.
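For example, a development.cfg that layers on a shared base might look roughly like this (the file names, part name and shared base are assumptions about your buildout):
    # development.cfg
    [buildout]
    extends = base.cfg
    [instance]
    # development-only tweaks on top of the shared instance configuration
    debug-mode = on
    verbose-security = on
A production.cfg would extend the same base.cfg but add the ZEO server, extra ZEO clients, supervisord, and so on.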
Whether or not you use ZEO in development depends entirely on your use cases. Most development setups certainly do not need it, but in a large deployment that includes async workers you may find you need a ZEO server when developing anyway. Again, buildout will make switching your development environment around for changing needs easier, especially when combined with supervisord to manage the various daemons used in the setup.
For the biggest deployment I manage, we use a combination of Subversion and git, but only for historic reasons. This is all being moved to one git repository. In the end, the repository will hold the complete buildout with its various configurations: development.cfg, staging.cfg and per-production-machine configuration files (one for each server in the production cluster). For a 3-machine ZEO setup, that'd be one zeo.cfg plus instances-1.cfg and instances-2.cfg, for example. The core development eggs are stored in the same repository in src/.
Development takes place exclusively in per-feature and per-issue branches. Once complete, a branch is merged into the staging branch, and the staging server is updated and restarted. Once the customer has signed off on each merged branch, and a roll-out has been planned, we merge the approved branches to the main branch and tag a release. The production cluster machines are then switched to that new tag and restarted.
We also use Jenkins for continuous integration testing; both the main and the staging branches are put through their tests at least once a day, letting us catch problems early.
All daemons are managed by a dedicated supervisord, which is started via a crontab entry (@reboot), not the native OS init.d structure. This way we can manage the running daemons entirely with buildout. The buildouts even generate logrotate configurations and munin monitoring plugins; these are symlinked into the OS locations as needed.
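The crontab entry for that is a single @reboot line in the buildout user's crontab (the path is an assumption):
    @reboot /srv/mybuildout/bin/supervisord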
Because a buildout is entirely self-contained, it doesn't matter where you set up the production buildout on a machine. Do pick something consistent, document it and stick to that. Make sure you have sufficient disk space for your daemon needs, and otherwise don't worry too much about it.
ZEO or not - for production or development - depends on personal needs and preferences. Many devs use ZEO for production and development and many do not. ZEO is needed for scaling (multiple app servers)
Nobody cares...install Plone where you want it or need it...re-running buildout after moving an installation fixes relocation issues. It is your system, not ours...you decide.
Usually buildout configurations live in a repository and can be pulled onto the production server from there; otherwise you need to copy over the related buildout files yourself. A typical installation works like this (sketched as shell commands after the list):
run virtualenv
checkout the buildout configuration from git/subversion
run bootstrap
run buildout
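Spelled out as shell commands, with the repository URL, directories and config file name as placeholder assumptions:
    virtualenv --no-site-packages /srv/plone/python
    git clone https://example.org/yourproject/buildout.git /srv/plone/buildout
    cd /srv/plone/buildout
    /srv/plone/python/bin/python bootstrap.py -c production.cfg
    bin/buildout -c production.cfg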

Duplicate a Drupal installation from one server to another

I have been developing a Drupal 6 site on my PC using XAMPP. I'm done now, and everything looks peachy.
Problem is, I need to put all my content (including custom modules and themes) up onto a staging server which only has a fresh Drupal 6 install on it. I can't imagine having to set up all my custom content types and whatnot all over again on the staging server.
So I ask, how does one go about doing what I need to do? Which is essentially duplicating my Drupal install from my PC, to the staging server.
The staging server is running Linux, and I develop on a Windows PC, if that helps.
Thanks in advance.
Copy all the files from development to live, mysqldump your database, and load that dump on the live server. Then all you have to do is change the settings.php file to point at the right database, if for some reason 'localhost' and your development credentials don't also apply on the live server.
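In shell terms that's roughly the following (database names, credentials and paths are assumptions):
    # On the development machine (XAMPP): dump the db and copy the files
    mysqldump -u root -p drupal_dev > drupal_dev.sql
    rsync -av /path/to/xampp/htdocs/mysite/ user@staging:/var/www/mysite/
    # On the staging server: load the dump into the site's database
    mysql -u drupaluser -p drupal_staging < drupal_dev.sql
Then update $db_url in sites/default/settings.php to match the staging database.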
The quickest solution is probably the backup_migrate module. It is only a tool to copy your database; you could also use phpMyAdmin or similar instead if you wanted. The backup_migrate module does have some good default settings as to which tables to skip (like cache tables). All the settings etc. that are not defined in code are stored in your db, so you only need to copy the db to be set. You can choose to exclude some tables, like the node or user tables, if you don't want to bring over your test data.
If you don't use Subversion, then you have to manually copy the files (rsync, scp, whatever) and the db (mysqldump).
What we usually do is have a hierarchy of independent Subversion repos as follows:
core
sites/all/modules/contributed
sites/all/modules/custom
sites/all/themes/ (we develop our own and don't use contributed themes)
sites/all/libraries
Then we use the svn:externals property so that if you check out "core" you get every associated repo.
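The externals definition on the core working copy would look something like this (repository URLs are made up), set for example with svn propset svn:externals -F externals.txt . and then committed:
    sites/all/modules/contributed  https://svn.example.com/drupal/contrib/trunk
    sites/all/modules/custom       https://svn.example.com/drupal/custom/trunk
    sites/all/themes               https://svn.example.com/drupal/themes/trunk
    sites/all/libraries            https://svn.example.com/drupal/libraries/trunk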
We have about 2 main developers, with 4 other people who may also contribute code to the site. Each has their own local dev environment, and we all have a common sandbox where we make sure the stuff we wrote doesn't break someone else's module (it has happened before!).
We use svn commit hooks to update the beta/staging/sandbox site upon commit.
With all that set up, [re]deploying a site is a simple matter of going to the proper folder, issuing a "svn co http://repolocation/reponame ." and then updating the DB.
Two last things to consider:
We are moving from svn to git.
The Features module will allow you to save changes you make (views, content types, etc.) and package all that into a deployable module so you don't have to duplicate your efforts. We are also looking into using this ourselves.
I hope this helps you.
I second using backup_migrate. It's great.
When I'm installing a fresh site from development to production, I:
backup the site using backup_migrate module
copy all the files up to the server
edit the sites/default/settings.php to have the right database path and account info
do an import of the last backup_migrate dump (usually using mysql < backupfilename.sql, unless I already have Drupal set up and backup_migrate installed, in which case I use the GUI)
But take a look here for the official version:
http://drupal.org/node/776864
Now, you didn't ask, but when the site is live and users are contributing content, moving future development versions of your site from development/staging to production without blowing away live content is a whole different problem, and one that Drupal doesn't have a good answer for...
Andy-

Drupal: how to upgrade a running production website to a dev version?

Can you help me understand how to do Drupal website deployment and development?
Suppose I developed version 1.0 of the Berty&Frank website. I copied everything to their production server and it is alive and kicking now. The site is already full of content and is growing.
I have been asked to add additional features to the website. I am now experimenting with how I can implement them in a dev version: I am creating/deleting content types, filling the created nodes with demo data just to see how they look, etc. Now that I have found the way, I want to upgrade the production website to the same structure as my dev version. How do I do that?
Is the only way to manually repeat every change I made in the dev version?
I would explore the Aegir project for the future management of your website. It allows you to clone a site, then to upgrade the site to a new "platform" which could be the next release of Drupal or another Drupal system (such as OpenAtrium).
More can be found at the aegir wiki.
You can export/import views and content types, but a lot of settings etc. are stored in the db. That gives two options:
Either use something like Backup and Migrate to import your settings from dev. This won't work if you have test data, though, as you would overwrite the db.
The other option is to repeat on the live site what you did in dev.
A third option could be to take a fresh dump of the live site, make all the settings changes in that copy of the db in the dev environment, and overwrite the live db with it. You could lose some comments etc., but it shouldn't be a big deal.
I use Subversion, and just do an update on my production server when I am satisfied with the code on my development server (actually, I have a staging server that is a duplicate of the production machine, so I update that before the production; I can see any bugs that might pop up).
For database changes, I haven't found anything better than just keeping track of my changes (usually adding/modifying CCK fields) and performing the same changes to the production database. I also download my production database regularly, so that dev and staging have almost the same content. That helps to minimize the confusion.
read http://www.drupal.org/upgrade/
