How to migrate Plone 4.2 from Debian Squeeze to Wheezy

I have a Plone 4.2 instance on Debian Squeeze. Now I am trying to migrate it to a new server running Wheezy: I copied the Plone directory from Squeeze to Wheezy and ran 'plonectl start'. It fails with the error: cannot find 'plone' user.
What should I do to migrate it? Is it enough to add the user and group 'plone'? Will this affect plone_buildout and plone_group in Plone 4.3?

You can nearly never move a large software package from one machine to another by simply copying the files. In this case, for example, the Plone installation is set up to run as a particular system user, "plone", and you don't have that user ID on the new machine.
Instead, do a fresh install of Plone on the new machine, then copy over the data and customizations. The order should probably be:
1) Do a fresh install on the new machine, modeled as closely as possible on the old; test it;
2) Copy the instance .cfg files and anything in ./src; test it;
3) Note the ownership and permissions of your data; tar it; unpack it on the destination machine; check ownership and permissions; test it.
If you are migrating from one Plone version to another, do not simply copy the .cfg files. Instead, transfer your customizations from the eggs= and develop= lists, and test whether any of them break the new Plone. Consult the upgrade guide for details on package migration. When transferring your data, match the ownership and permissions of the target, not the source. Run the Plone migration process for each Plone instance in your install. 4.2 -> 4.3 is a largely painless upgrade.
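A minimal sketch of that sequence might look like the following; the Unified Installer invocation and the /usr/local/Plone/zeocluster paths are assumptions, so substitute your own layout:

```bash
# On the new (Wheezy) machine: do a fresh install, which also creates the
# 'plone' system user/group. Installer path and flags are illustrative.
sudo ./install.sh --target=/usr/local/Plone zeo

# Pull over the buildout configuration and any development eggs, then
# re-run buildout and test.
rsync -av oldserver:/usr/local/Plone/zeocluster/*.cfg /usr/local/Plone/zeocluster/
rsync -av oldserver:/usr/local/Plone/zeocluster/src/ /usr/local/Plone/zeocluster/src/
cd /usr/local/Plone/zeocluster && sudo -u plone bin/buildout

# Pull over the data (Data.fs, blobstorage), fix ownership to match the
# target machine, then start.
rsync -av oldserver:/usr/local/Plone/zeocluster/var/ /usr/local/Plone/zeocluster/var/
sudo chown -R plone:plone /usr/local/Plone/zeocluster/var
sudo -u plone bin/plonectl start
```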

Related

Drupal 9 in an airgap or without composer

Does anyone have any experience with Drupal 9 either without composer or in an airgap? Basically we're trying to run it in an airgapped server. Composer obviously wants to access the internet for checking and downloading packages.
You'll need to run composer to install your packages and create your autoload files to make it all work.
You could create your own local package repository and store the packages you need there, however this would be a large undertaking given all the dependencies Drupal Core and contrib modules use. You'd need to manage them all yourself, and keep your local versions synced with the public versions, especially for security updates.
If you need to do that anyway, you're better off just using the public repos.
Documentation on composer repos is here:
https://getcomposer.org/doc/05-repositories.md
Near the bottom it shows how to disable the default packagist repo:
https://getcomposer.org/doc/05-repositories.md#disabling-packagist-org
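The docs show the composer.json form; the equivalent config commands should look roughly like this (the repository name and internal URL are placeholders):

```bash
# Hedged sketch: register a self-hosted Composer repository (placeholder URL)
# and disable the default packagist.org lookup.
composer config repositories.internal composer https://composer.example.internal
composer config repo.packagist false
```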
A much simpler alternative would be to do development in a non-air-gapped environment, so you have access to the packages you need and can run composer commands to install everything. Then, once your code is in the state you need, copy it to your air-gapped server to run. Once composer install has run, nothing else is required. Just make sure you include the vendor directory with all your dependencies, as well as Drupal core and contribs.
The server you run your Drupal instance on does not even need Composer installed.
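A rough sketch of that build-then-copy workflow, with placeholder hostnames and paths:

```bash
# On a machine with internet access: resolve and download everything.
composer install --no-dev --optimize-autoloader

# Bundle the whole project, including vendor/, core, and contrib.
tar czf drupal-site.tar.gz --exclude='.git' .

# Move the archive across the air gap however your environment allows,
# then unpack it on the target server (placeholder host/path).
scp drupal-site.tar.gz user@airgapped-server:/var/www/
ssh user@airgapped-server 'cd /var/www && tar xzf drupal-site.tar.gz'
```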

How to install graphite and all its prerequisites as a user without root permissions?

We are trying to install Graphite to capture Neo4j database metrics. The installation will be done under the neo4j user, which does not have root permissions. There are multiple pages on the web that detail this procedure, but most of them fail at one stage or another. Is there a way to install all components of Graphite and its prerequisites as a non-root user?
If you don't have root, then you are most likely not supposed to install applications on that server and should ask your system administrator to install it for you.
That said: which prerequisite does it fail on? You can install all the Python parts in a virtualenv. Cairo and the other system requirements are a bit harder, but still doable. You'll also have some trouble getting it to start automatically after a reboot.
I'm actually working on making installation easier and updating the documentation. I could use some feedback.
https://github.com/piotr1212/graphite-web/blob/setuptools/docs/install.rst#using-virtualenv <- work in progress; you will have to follow virtualenv -> install from source, but replace "graphite-project" with "piotr1212" in the GitHub URLs and git checkout setuptools in every directory before running pip install .
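For the Python side, a non-root install along those lines might look like this; whisper, carbon, and graphite-web are the usual PyPI package names, while the system libraries remain the hard part:

```bash
# As the unprivileged neo4j user: create an isolated Python environment.
virtualenv ~/graphite
source ~/graphite/bin/activate

# The pure-Python components install cleanly into the virtualenv.
pip install whisper carbon graphite-web

# cairo and other system libraries are the hard part without root; if they
# are missing you would have to build them into a local prefix yourself:
#   ./configure --prefix=$HOME/local && make && make install
```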

Using GIT to keep operational systems up to date

I'm not a Git novice, but not a guru either, and I have a question. We want to create remote repos that appear as folders within our network. The folders will actually contain a large legacy ASP app running in production.
Then we want to be able to make local changes and be able to push commits to these networked repos and thus dynamically update the production application.
We already have the repo on Github and our developers fork that and work locally (we use SmartGit for most day to day stuff).
However (because the app is huge and legacy) we have (prior to using Git) always had a process for copying changed files to the target systems (production, QA etc).
But it dawned on me that we may be able to treat the operational system "as" a repo that's checked out to master. Then (when suitably tested) we want to simply use SmartGit to do a "push to" the operational system and have the changes delivered that way.
I'm at the edge of my Git knowledge though and unsure if this is easy to do or risky.
We don't want to install Git on the operational machine (it's restricted to running Windows 2003 - yes, I know...), so we want to treat the remote system just like a local folder, with Git installed only on our local machines.
Any tips or suggestions?
My tip: don't bother.
You can only push to bare repositories. These contain only the files normally residing in .git, with no working directory at all, so you cannot "run" them on the server. You would need to push to a bare repository on the server and then clone/checkout that bare repository into a non-bare one on the server itself (which can be done in a post-receive hook inside Git). But as you said, you cannot even install Git on the server. So git push does nothing for you.
The second option would be to mount the server's filesystem on whatever staging/deployment machine you have, presumably one on which you can install Git. Then you can git push into a bare repository on that deployment machine, run Git hooks, and copy newly pushed files into the server's non-Git filesystem.
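A sketch of that second option, assuming the server's share is mounted at /mnt/production on the deployment machine (paths and branch name are illustrative):

```bash
# On the deployment machine: a bare repository to push to.
git init --bare /srv/deploy/app.git

# Install a post-receive hook that checks the pushed branch out into the
# server's share, mounted here at /mnt/production.
cat > /srv/deploy/app.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/mnt/production git --git-dir=/srv/deploy/app.git checkout -f master
EOF
chmod +x /srv/deploy/app.git/hooks/post-receive
```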
The third option would be to package everything up locally, make a tarball (or, I guess, a zip-ball...) and just unpack that on the server.
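That third option is nearly a one-liner with git archive, which packages only the tracked files, without any .git metadata:

```bash
# Build a zip of the current branch's tracked files; no .git directory inside.
git archive --format=zip --output=release.zip HEAD

# Copy release.zip to the Windows 2003 box however you like, then unpack it
# over the application directory there.
```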
So. Automated, continuous deployment => great idea. Using git => great idea. Directly using git push, not so much, mainly due to your constraints.

What is the recommended procedure for upgrading to a new maintenance build of oXygen XML?

Periodically we receive announcements of new maintenance builds of Oxygen XML Editor. It's easy to locate documentation on installing new versions, but I was unable to find any instructions on installing maintenance builds.
In the past I've renamed the downloaded folder, e.g., "17-1", which completely duplicates all the files in Applications (I'm using OS X), and later deleted the older folders when it seemed safe to do so.
I would like to know the best-practice, most efficient way to routinely install these frequently released maintenance builds.
Since there is no Oxygen installer for OS X (it's just an archive), there is no straightforward way of upgrading (installing in the same folder) as there is for Windows or Linux.
The official upgrade procedure for maintenance builds (it's the same for minor version updates) goes like this:
To upgrade:
For Windows and Linux, you can install the new build in the same folder as the previous installation; it will automatically upgrade it.
Before you upgrade, if you have added files or made changes to any of the files from the Oxygen installation folder (especially the frameworks folder), you may want to create a backup of them because they will be overwritten during the upgrade procedure. Custom frameworks will be preserved but we recommend backing them up anyway, just to be safe.
For Mac OS X you will have to either move the old folder from Applications to a different location and put the new version of Oxygen in its place, or install in a different folder. You can then copy any files you may have changed from the old folder (if any) to the new folder.
The Oxygen preferences will be preserved since they are located elsewhere (user home folder).
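On OS X that move-and-replace amounts to something like the following; folder and archive names are illustrative and will vary with your download:

```bash
# Set the old version aside until the new build is verified.
mv /Applications/Oxygen /Applications/Oxygen-previous

# Unpack the newly downloaded archive into place.
unzip ~/Downloads/oxygen.zip -d /Applications

# Copy back anything you changed in the old installation folder, e.g.:
#   cp -R /Applications/Oxygen-previous/frameworks/myframework /Applications/Oxygen/frameworks/
# then remove /Applications/Oxygen-previous once the new build checks out.
```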
What I'd like to add is that, if you have custom frameworks and want to keep Oxygen up to date, it's a good idea to keep the custom frameworks in a folder outside the Oxygen installation folder (e.g. in your user home) and simply configure Oxygen to load them from there (Options > Preferences, Document Type Association > Locations, Additional frameworks directories). This greatly simplifies the upgrade procedure.
Regards,
Adrian
According to a colleague, his way of doing it, FWIW:
I keep all oXygen stuff in the directory /Applications/oxygen.
When I get a new oxygen.zip download, I put it there, unzip it, and rename the directory to the oXygen version name. So right now I have /Applications/oxygen/17.0.
I usually compress the previous version and delete the directory for it, but keep the zipfile for a while in case I need to revert to the old version.
I keep the related jarfiles in /Applications/oxygen/lib so that they don't live in the same directory as an oxygen version that might get upgraded.
I create an alias under /Applications named "oxygen" that points to whatever current version of oXygen Editor I'm using (and it needs to be updated whenever the current directory changes).
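In shell terms that routine looks roughly like this, using a symlink where he uses a Finder alias (version numbers and the link name are examples):

```bash
cd /Applications/oxygen

# Unpack the new build and rename it to its version number
# (the unpacked folder name may differ).
unzip ~/Downloads/oxygen.zip && mv oxygen 17.1

# Compress and remove the previous version, keeping the zip for reverts.
zip -r 17.0.zip 17.0 && rm -rf 17.0

# Repoint the stable path at the current version.
ln -sfn /Applications/oxygen/17.1 /Applications/oxygen-current
```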
I can't accept this as the best answer unless I receive confirmation that this is the ideal method on Mac OS X. If there is another proposed procedure that is conventionally accepted as the best practice, or a definitive answer from an authoritative source, then I will accept that answer.

Working with Drupal in latest Openshift version

I have a couple of working Drupal installations on OpenShift. I prefer OpenShift for testing and development.
However, of late, when I try to install a new instance of Drupal, the 'php' folder is missing, and as a result editing any files is a real nightmare. I decided to create a 'php' folder and build my Drupal from there.
The challenge, however, has been that with every update I push, my settings.php file is deleted and I have to fix it via SSH just to get the site working again. This is a real bother, and I am looking for a better way to work with Drupal in peace on OpenShift.
The QuickStart has changed to automatically deploy a Drupal instance in your data dir and symlink the changes. This means at create time you get the latest Drupal, and you can configure and deploy Drupal live. If you prefer the old model, just copy the php folder off the gear and check it into your source repo. The hooks (which you can also change) won't deploy Drupal if you have a php folder, meaning your base Drupal still works.
We made this change so that folks creating a new instance get something running immediately that is up to date with security patches (and so you can install modules directly on the server). settings.php lives in your data directory and is symlinked in rather than copied.
You can continue to use the old Drupal-example repo as well.
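If you go back to the old model, copying the php folder off the gear and pinning it in your repo might look like this; the SSH target and the path are placeholders and depend on the quickstart version:

```bash
# Copy the deployed php folder off the gear (SSH user/host and the path
# are placeholders; the exact location varies with the quickstart).
scp -r UUID@myapp-mydomain.rhcloud.com:app-root/repo/php ./php

# Check it into the source repo; with a php folder present, the hooks skip
# the automatic Drupal deployment.
git add php
git commit -m "Pin Drupal in the repo (old quickstart model)"
git push
```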
