I have a couple of working Drupal installations on OpenShift. I prefer OpenShift for testing and development.
However, of late, when I try to install a new instance of Drupal, the 'php' folder is missing, and as a result editing any files is a real nightmare. I decided to create a 'php' folder and build my Drupal from there.
The challenge, however, has been that with every update I push, my settings.php file is deleted and I have to fix it via SSH just to get the site working again. This is a real bother, and I am looking for a better way to work with Drupal in peace on OpenShift.
The QuickStart has changed to automatically deploy a Drupal instance in your data dir and symlink the changes. This means that at create time you get the latest Drupal, and you can configure and deploy Drupal live. If you prefer the old model, just copy the php folder off the gear and check it into your source repo. The hooks (which you can also change) won't deploy Drupal if you have a php folder, meaning your base Drupal still works.
We made this change so that folks creating a new instance get something running immediately that is up to date with security patches (and so you can install modules on the server directly). The settings.php file lives in your data directory and is symlinked in rather than copied.
You can continue to use the old Drupal-example repo as well.
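If you do switch back to checking Drupal in, copying the deployed php folder off the gear looks roughly like this (the gear's SSH user/host and the on-gear path are assumptions; check where the quickstart actually placed Drupal on your gear):

    # copy the deployed Drupal out of the gear into your repo;
    # the SSH user/host and the source path below are hypothetical
    scp -r 12345abcdef@myapp-mydomain.rhcloud.com:app-root/data/current php
    git add php
    git commit -m "Check in Drupal so the deploy hooks skip the auto-install"
    git push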
Related
I'm working on figuring out how to deploy a WordPress site from Bitbucket Pipelines to AWS Elastic Beanstalk. It all makes sense except for one thing: I want the repository to contain only the theme and plugin files.
I do not want the full WP directory being deployed each time. Media would be handled via an S3 bucket and the DB would use RDS.
What is the best way to get WP installed but only have the theme and plugins deploy through Pipelines? And when I want to update to the latest version of WP, how would that work?
Or am I going about this wrong?
The simple solution, and best practice in my humble opinion, is to keep the entire WordPress installation in the repo, including WordPress core and all your custom themes and plugins.
Having the entire installation in one repo solves many problems: you can tag and release versions, and you can install everything locally with a simple git clone.
Regarding the file system, definitely consider EFS instead of S3. It is much more reliable and easier to mount on a Linux-based system. Make sure to set the file-path environment variable so you can point WordPress at the files, and mount the volume outside of the software file tree.
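As a rough illustration of that setup, mounting EFS and pointing the uploads directory at it might look like this (the filesystem ID, region, and paths are invented for the example):

    # mount the EFS volume (filesystem ID and region are hypothetical)
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
    # keep uploads on the shared volume, outside the deployed code tree
    sudo mkdir -p /mnt/efs/uploads
    sudo ln -sfn /mnt/efs/uploads /var/app/current/wp-content/uploads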
I have been running this kind of setup for three years with no problems. We do releases through the CodeDeploy service daily. Very straightforward and easy to maintain.
To upgrade WordPress, just check out the current version from the repo, apply the upgrade release, do comparisons, test, commit, and release.
There are many wonderful tutorials describing in great detail how to set up a horizontally scaled WordPress install in AWS' Elastic Beanstalk - that part is no problem. But I haven't found any follow-up advice on how to manage plugin updates after the initial setup, let alone updating WordPress core itself. Does anybody know the best way to do this?
This is the methodology I'm using so far (sketched as commands after the list), but I'm not sure if it is the best way:
Download the plugin's update file and unzip it. Remove and replace the relevant folder in /wp-content/plugins (local git repo)
Run the update in the live site like normal - to ensure that any database changes get pushed up to the RDS
eb deploy from the local repo to commit the file changes and make the update persistent
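As a sketch of those three steps (the plugin slug, version, and paths are made-up examples):

    cd ~/sites/my-wp-repo                       # local git repo
    curl -LO https://downloads.wordpress.org/plugin/example-plugin.1.2.3.zip
    rm -rf wp-content/plugins/example-plugin    # remove the old folder
    unzip example-plugin.1.2.3.zip -d wp-content/plugins/
    # after running the update in wp-admin on the live site, persist the files:
    git add -A && git commit -m "Update example-plugin to 1.2.3"
    eb deploy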
Is that a sane method? Could anything get corrupted down the line?
For updating WP core, the tutorials I've read seem overcomplicated - basically rebuilding the site from scratch every time an update comes out. Below is what I have been using instead (it worked for WP 5.0.2; the steps are sketched as commands after the list). Is there any chance of files and databases getting out of sync using this method?
Download and unzip the new wordpress version locally
Replace wp-admin, wp-includes, and the root files except for wp-config.php (local git repo)
Run the update in the live environment, so that any database changes get pushed up to RDS.
eb deploy
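Sketched as commands (the version number and paths are examples):

    cd ~/sites/my-wp-repo
    curl -LO https://wordpress.org/wordpress-5.0.2.zip
    unzip -q wordpress-5.0.2.zip -d /tmp
    rm -rf wp-admin wp-includes
    cp -R /tmp/wordpress/wp-admin /tmp/wordpress/wp-includes .
    # the fresh download ships no wp-config.php, so the live one is untouched
    cp /tmp/wordpress/*.php /tmp/wordpress/*.txt /tmp/wordpress/*.html .
    git add -A && git commit -m "Update WordPress core to 5.0.2"
    eb deploy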
I've been running with the above methods for a while and feel pretty confident that they are sound. I only have a couple of tweaks thus far.
The following assumes an environment where there is one staging server outside of the horizontally scaling live environment. This could be further improved for a multi-developer environment using AWS CodeCommit.
For Plugins:
Run the plugin update normally on the staging server (in wp-admin). Test everything to make sure the update is sound.
Remove the plugin's old folder from your local git repo and download the updated folder from the staging server using SFTP.
In the local repo, run git add -A && git commit -m "updated Plugin Name" && eb deploy
Run the same update in Live (in wp-admin). It will only apply to one server, but should guarantee that any database changes get pushed up to the single RDS.
Roll out the change to the live environment using the Software Versions page in the AWS Console (in Elastic Beanstalk)
Updating WP core is almost identical, except that instead of removing and replacing a single plugin directory, you will need to remove and replace /wp-admin/, /wp-includes/, and all of the files in the root folder except for wp-config.php.
I usually deploy my vendors with a simple composer install in production.
I would prefer not using composer in production, so I'd need to build the vendors from my machine and deploy them in production.
I could copy the vendor directory, but I'll certainly have to install other files like app/bootstrap.php.cache or other autoloader files.
So, two questions:
Which files would I need to install or update?
Are there any known practices for deploying pre-built vendors?
I would say the procedure is pretty straightforward (at least it works for me that way): to deploy your application, you create a new directory and export the code from a tag into it (i.e. you don't export repository metadata like the .git directory). You then run composer install --no-dev, which will do some work and should also run anything mentioned in the scripts section of the composer.json file.
The result in this previously empty directory goes to the production server in whatever way you like, be it SCP, SFTP, or rsync. There is no real "magic" going on here; essentially it is copying of files.
You may want to make sure you can roll back quickly, so I'd recommend deploying every version into a designated directory and then pointing at the current version with a symlink. As an example: you deployed your old version in /srv/www/htdocs/app-1.0 and symlinked the directory /srv/www/htdocs/app to point to it. The vhost uses the generic app directory to serve the app.
The deployment will create a new directory /srv/www/htdocs/app-1.1, and putting it live simply means deleting the old symlink and creating a new one pointing to the new directory. This puts your new version live instantly. Rolling back means deleting the symlink and recreating the one pointing to the old version.
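A rough sketch of that flow (the host, paths, and tag name are examples, not prescriptions):

    # export the tagged code without repository metadata
    mkdir -p build-1.1
    git archive --format=tar v1.1 | tar -xf - -C build-1.1
    # build the vendors locally, not in production
    (cd build-1.1 && composer install --no-dev --optimize-autoloader)
    # ship the result and flip the symlink
    rsync -az build-1.1/ deploy@example.com:/srv/www/htdocs/app-1.1/
    ssh deploy@example.com 'ln -sfn /srv/www/htdocs/app-1.1 /srv/www/htdocs/app'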
YMMV, because things like caches will affect the outcome, but that is outside the scope of how and where to use Composer to deploy software.
I ran across the VCCW project and, despite my unfamiliarity with Vagrant and Chef, decided to give it a try. I followed the instructions and obtained the VCCW project itself by installing the GitHub Windows program and cloning the VCCW GitHub master repository. I should also mention that I have very little experience with Git.
Anyway, now I have VCCW WordPress running on my machine, but I've no clue what to do from here. I wanted to set up a better, more formal WordPress development environment so I could write my plugin and modify a theme, but I don't know where I should do that. I know where the actual WordPress installation resides on my file system, so I suppose it would be easy to work from there, but I don't know how (if at all) that interplays with the Vagrant workflow - i.e., when it comes time to use Vagrant to deploy my site, will my changes to the "www" folder (which was created by vagrant up) be captured? Somehow I doubt this. I'm just looking for any help as to how all these fancy new tools work with each other and what a humble PHP developer like myself should do to get started.
Edit: One more question: which IDE, if any, can I use in conjunction with this arrangement? Create a new project from existing sources, and let it pollute my deployment folder with project files?
From the Vagrantfile, it looks like you should look in ./www/wordpress after provisioning.
VCCW includes the deployment tool Wordmove.
You can deploy WordPress to your server from VCCW once you set up the Movefile.
The Movefile is created automatically after provisioning, so you just add your server configuration to it.
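For example, from memory of the Wordmove CLI (verify the flags with wordmove help; the Movefile location inside the guest is an assumption):

    vagrant ssh          # enter the VCCW guest
    cd /vagrant          # directory containing the Movefile
    wordmove push -d     # push the database to the remote defined in the Movefile
    wordmove push -t -p  # push themes and plugins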
http://vccw.cc/#h2-10
https://github.com/welaika/wordmove
I have been developing a Drupal 6 site on my PC using XAMPP. I'm done now, and everything looks peachy.
Problem is, I need to put all my content (including custom modules and themes) up onto a staging server which only has a fresh Drupal 6 install on it. I can't imagine having to set up all my custom content types and whatnot all over again on the staging server.
So I ask: how does one go about doing what I need to do, which is essentially duplicating my Drupal install from my PC to the staging server?
The staging server is running Linux, and I develop on a Windows PC, if that helps.
Thanks in advance.
Copy all the files from development up to live, and mysqldump your database and run that dump on the live server. Then all you have to do is change the settings.php file to point at the right database, in case 'localhost' is not also the MySQL host on the live server.
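In command form, something like this (hosts, paths, and credentials are placeholders):

    # copy the code base up to the server
    rsync -az /path/to/local/drupal/ user@staging.example.com:/var/www/drupal/
    # dump the local database and load it on the server
    mysqldump -u root -p drupal_dev > drupal_dev.sql
    scp drupal_dev.sql user@staging.example.com:/tmp/
    ssh user@staging.example.com 'mysql -u drupal -p drupal < /tmp/drupal_dev.sql'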
The quickest solution is probably the backup_migrate module. It is only a tool for copying your database; you could also use phpMyAdmin or similar if you wanted, but backup_migrate has good default settings for which tables to skip (like the cache tables). All the settings that are not defined in code are stored in your database, so you mostly just need to copy the database. You can choose to exclude some tables, like the node or user tables, if you don't want to bring over your test data.
If you don't use Subversion, then you'll have to manually copy the files (rsync, scp, whatever) and the database (mysqldump).
What we usually do is keep a hierarchy of independent Subversion repos as follows:
core
sites/all/modules/contributed
sites/all/modules/custom
sites/all/themes/ (we develop our own and don't use contributed themes)
sites/all/libraries
Then we use the svn:externals property so that if you check out "core" you get every associated repo.
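For instance, the externals can be wired up roughly like this (the repository URLs are invented for the example):

    # set a multi-line svn:externals property on the core working copy
    svn propset svn:externals 'sites/all/modules/contributed http://svn.example.com/drupal/contributed
    sites/all/modules/custom http://svn.example.com/drupal/custom
    sites/all/themes http://svn.example.com/drupal/themes
    sites/all/libraries http://svn.example.com/drupal/libraries' .
    svn commit -m "Pull module/theme/library repos in via externals"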
We have two main developers, with four other people who may also contribute code to the site. Each has their own local dev environment, and we all share a common sandbox where we make sure the stuff we wrote doesn't break someone else's module (it has happened before!).
We use SVN commit hooks to update the beta/staging/sandbox site upon commit.
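A minimal post-commit hook along these lines (the checkout path and log file are assumptions):

    #!/bin/sh
    # hooks/post-commit - update the staging checkout on every commit
    REPOS="$1"
    REV="$2"
    svn update --non-interactive /var/www/staging >> /var/log/svn-deploy.log 2>&1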
With all that set up, [re]deploying a site is a simple matter of going to the proper folder, issuing "svn co http://repolocation/reponame .", and then updating the DB.
Two last things to consider:
We are moving from SVN to Git.
The Features module will allow you to save the configuration you build in the UI (views, content types, etc.) and package it all into a deployable module, so you don't have to duplicate your efforts. We are also looking into using it ourselves.
I hope this helps you.
I second using backup_migrate. It's great.
When I'm installing a fresh site from development to production, I do the following (sketched as commands after the list):
backup the site using backup_migrate module
copy all the files up to the server
edit the sites/default/settings.php to have the right database path and account info
do an import of the last backup_migrate dump (usually using mysql < backupfilename.sql, unless I already have Drupal set up with backup_migrate installed, in which case I use the GUI)
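Roughly, in commands (the host, paths, and database names are placeholders):

    # copy the files up and load the dump on the production server
    rsync -az ./mysite/ user@prod.example.com:/var/www/mysite/
    scp backupfilename.sql user@prod.example.com:/tmp/
    # on the server: point sites/default/settings.php ($db_url in Drupal 6)
    # at the production database, then import
    ssh user@prod.example.com 'mysql -u drupal -p drupal_prod < /tmp/backupfilename.sql'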
But take a look here for the official version:
http://drupal.org/node/776864
Now, you didn't ask, but when the site is live and users are contributing content, moving future development versions of your site from development/staging to production without blowing away live content is a whole different problem, and one that Drupal doesn't have a good answer for...
Andy-