I never know what to do when my Wordpress installation tells me there's an update available. I am using version 2.8 so whenever there is an update, all I have to do is click update, some magic happens behind the scenes, and it gets updated. But should I create backup files? And how? I have custom themes and plugins that I don't want to get lost because I don't have backups! Is it safe to assume that nothing bad will happen when you click the upgrade button? What is your process when you decide to upgrade to the newest version?
Back up the database, the wp-content directory, and the configuration files first.
There are plugins to make this easier, but since you're asking on Stack Overflow, I'll assume you could write a script to do it yourself. While you're at it, add the script as a cron job.
http://codex.wordpress.org/WordPress_Backups#Backup_Resources
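For example, a minimal cron-able script along these lines covers both the database and the files (the paths, database name and credentials are placeholders; adjust them to your install):

#!/bin/sh
# Minimal WordPress backup sketch -- dump the DB, archive the files.
STAMP=$(date +%Y%m%d)
BACKUP_DIR=/var/backups/wordpress
mkdir -p "$BACKUP_DIR"

# 1. Dump the database
mysqldump -u wp_user -p'secret' wp_database > "$BACKUP_DIR/db-$STAMP.sql"

# 2. Archive wp-content (themes, plugins, uploads) and the config file
tar czf "$BACKUP_DIR/files-$STAMP.tar.gz" \
    /var/www/wordpress/wp-content \
    /var/www/wordpress/wp-config.php

A crontab entry such as 0 3 * * * /usr/local/bin/wp-backup.sh then runs it nightly.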
Always back up before making a big change like that.
You'll want to copy all your files to a safe place via FTP. Copy 'em, zip 'em up, and keep them somewhere safe where you can remember where they are. You'll also want to back up or "export" the database and keep that safe too. This way, if something goes wrong, you can restore it to the way it was.
There's a good backup script here for Wordpress sites:
http://www.guyrutenberg.com/2008/05/07/wordpress-backup-script/
based on Bash and bzip2.
I usually don't update anything in production without testing it first, unless it's a simple modification and it's about security (like the 2.8.4 update).
The ideal thing to do is to create a separate installation to act as a test server: it can be on your local machine, or just a whole different installation on your server. Why? Remember you have plugins installed and some may break; updating everything can't be a "blind" decision!
So, before updating in the production installation/server, always test in the "test environment".
Nothing is worse than having your website down because of an update error.
I have a WordPress website up and running with many plugins installed on it and a huge database, and I need to use chef-solo to create an environment in which I can install the same website with all its plugins, and also import its database.
In other words, I need to use Chef to install the same website on a different server, exactly the same.
Now here are my questions:
1. I know we can use Chef to install WordPress, but can we set it up in such a way that we don't need to configure WordPress, and everything is already set once it's running?
2. What to do with the plugins? Can we install them using Chef, or does that have to be done manually?
3. How about importing the database; can that be done with chef-solo as well?
4. The whole website is on git; can I somehow import the whole thing?
5. Are there any other issues I may possibly face if I want to do that?
There is a WordPress cookbook openly available for Chef.
1. By "configure", I take it you mean setting up data in the database. Assuming that you've separated the database instance from the server instance, and you're attempting to scale up the number of servers, then you should be able to skip data setup. You should be configuring the new server instance (node) to point at the same database via Chef.
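For illustration, a chef-solo run against such a node might look like the sketch below. The attribute names under "wordpress" are assumptions on my part; check the cookbook's attributes files for the keys it actually reads:

cat > node.json <<'EOF'
{
  "run_list": [ "recipe[wordpress]" ],
  "wordpress": {
    "db": {
      "host": "db.internal.example.com",
      "name": "wp_production",
      "user": "wp",
      "password": "secret"
    }
  }
}
EOF
chef-solo -c solo.rb -j node.json

Every server provisioned with the same node.json ends up pointing at the same external database, so no per-server data setup is needed.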
2. I stumbled into this question while looking for the answer to this one myself. From what I can tell, the start may be here. It's kind of hand-wavy, but it should enable you to do some WordPress tasks via the command line with Chef, rather than the point-and-click it prefers.
3. As per #1, you should not need to import the database. If the database goes down, you'll want to focus on that as a separate but connected recipe, since you'll want to be taking snapshots and uploading them somewhere like S3 via a cron job. I believe there are plugins that can enable this.
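A rough sketch of that snapshot job (the bucket name and credentials handling are placeholders, and it assumes the AWS CLI is installed and configured):

#!/bin/sh
# Nightly DB snapshot to S3 -- illustrative only.
STAMP=$(date +%Y%m%d)
mysqldump -u wp -p'secret' wp_production | gzip > /tmp/wp-db-$STAMP.sql.gz
aws s3 cp /tmp/wp-db-$STAMP.sql.gz s3://my-wp-backups/
rm /tmp/wp-db-$STAMP.sql.gz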
4. You'll have to be a little more clear about "import". If it's in a code base, you may be able to short-cut your cookbook path by pulling the git repo down onto the host. You may want to look at git-archive.
5. One other issue I'm looking at is images. We're migrating from a hosted solution to AWS, and it appears that instead of storing the images in the database, WordPress pulls them into a local directory. This means that if we scale to more than 1 host, we'll have issues with images. Something to think about; there's a wealth of plugins that can probably solve this.
Hope this is helpful,
Ben
I'm not a WordPress developer, but I'm trying to determine the optimal way for a team of folks to work with WordPress. For a Rails project or most anything else, it's easy to work locally and deploy upstream, but my understanding is that WordPress doesn't make this quite as easy. Maybe this is a myth?
From what I gather, it's not uncommon for URLs and file paths to be stored in the database, which seems like it would make it difficult to deploy a WP project from dev --> stg --> prd (where each environment has its own URL and possibly different file paths), much less for individual developers to have their own dev environment that would need to be "merged" into a unified copy for deployment.
I could configure all developer sandboxes to use a single database, but here again, if URLs and file paths are stored then nothing is gained.
There are a series of smaller questions here, but the more I think about those, the more I realize that what I'm really asking for is advice about how to structure things for optimal development of a WordPress site that will be hacked on by a team of developers. I'd prefer the sandboxed approach we use for other projects, but I have no idea if/how things can be unified once all development is complete.
Warning: incoming wall of text..
@Rob, WP is hell when it comes to working in teams; however, with a little work (and some symlink magic) you can set up your WP projects so that the working files for your themes or plugins reside separately from the WP core. Some of this uses WP's built-in mechanisms; some of it is related to SVN externals (hint). I'll let you google that, since it's outside the scope of your question.
A note on WP GUIDs
WARNING: DO NOT replace GUIDs. WP GUIDs are there for external feed readers. Feed readers use the GUID to determine whether content is recent. Changing it basically tells those readers that every entry in the feed is new (especially for posts). That introduces a lot of extra overhead for legacy content that you just don't need. GUIDs are a legacy feature that should have been changed a long time ago to UUIDs. Technically, you can use anything in the GUID field, but WP uses the permalink to populate that field -- legacy.
The only time it is ever acceptable to change the GUID is for new wp projects where content is brand-spankin' new.
To answer your question:
WP stores explicit references to the current domain in a dozen places in its DB. These locations are a pain to track down and change, and the last thing you want to do is deal with manual edits to a *.sql dump file that you're going to import into production. It just smacks of bad development practices.
There are a couple of ways to get around this, but it means a little bit of work if you're already further along in your development lifecycle. I'll address the first case.
Case 1: Project Onset
When you're starting the project, you'll likely have a development sandbox and DB ready. You'll likely have WP already installed by now, so it's essentially clean for all intents and purposes.
The first thing you're going to want to do is change how your config file works. Most folks stick with the standard wp-config.php file (beyond a team production project, there's not really any reason to edit it). However, you can set it up with some logic to include developer-specific or environment-specific config files. For example:
wp-config.php
$current_environment = $_SERVER[ 'HTTP_HOST' ]; // e.g. jack.local

switch( $current_environment )
{
    case 'jack.local' : include( 'wp-config-jack.php' );   break; // Jack's sandbox
    case 'jill.local' : include( 'wp-config-jill.php' );   break; // Jill's sandbox
    default           : include( 'wp-config-remote.php' ); break; // Staging & Production
}
The next thing you're going to want to do is put the normal contents of the wp-config.php file into a wp-config-remote.php file for use on staging/production. Next, edit your wp-config-remote.php file so that you can use one config file across multiple environments (staging, production). An if(...) or switch(...) block is all you need, e.g.
if( (strpos( $_SERVER[ "HTTP_HOST" ], "localhost" ) !== false) || (strpos( $_SERVER[ "HTTP_HOST" ], "local" ) !== false) )
(There are better ways to write that condition... this is just a crude example.)
Configure all of your WP settings specific to each of your remote environments. Hopefully you'll be checking this into a source control repository.
That basically frees you up to let your team have config settings specific to their environment, while letting you check in settings for each of the remote environments once.
The second thing you're going to want to do is build a mechanism to intercept and filter domain-specific links. The intent behind this mechanism is to replace any references to the current domain with a token/placeholder. I've outlined the technique to do this here: http://www.farfromfearless.com/2010/09/07/url-token-replacement-techniques-for-wordpress-3-0/
It basically amounts to creating a filter that acts on the content before it's submitted to the DB and before the content is rendered to the page. The technique is transparent in that it won't affect normal editing practices. You can still create your content in the editor, reference other pages, posts, images, etc. and they'll show up just fine while editing in different environments.
In recent projects, I've wrapped all of this and a few other WP "normalization" features into a single bootstrap plugin that I set & forget.
Case 2: Project Ongoing
Now, in your case, you're further along in your development lifecycle. It's going to take some work to replace those domain references, but if you follow the steps I've outlined above you should only ever have to do this once. The link I supplied above gives you the SQL you'll need to do the job. It's important to note that in a multi-site environment, you'll need to do this for every "sub-site" you've created.
Once you've updated your DB, I suggest implementing the steps in CASE 1 so you don't have to repeat the steps again.
Bonus: synchronizing content
Synchronizing content is a pain. What I've done in recent projects is have clients work on the staging server and promote changes upstream to production. That leaves you with synchronizing downstream to your sandbox(es). Write a shell script that dumps a copy of SPECIFIC content tables from your staging DB and imports them into your sandbox DB (effectively replacing the content tables). You should be able to see the benefit of the domain-token-replacement technique here.
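A rough sketch of such a script, assuming the standard wp_ table prefix and placeholder hostnames/credentials:

#!/bin/sh
# Pull content tables from staging down into a sandbox DB -- illustrative only.
TABLES="wp_posts wp_postmeta wp_comments wp_commentmeta wp_terms wp_term_taxonomy wp_term_relationships"
mysqldump -h staging.example.com -u wp -p'secret' wp_staging $TABLES > content.sql
mysql -u wp -p'secret' wp_sandbox < content.sql

Because domain references in those tables have already been replaced with tokens, the dump imports cleanly into any environment.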
Images that aren't checked into source control (e.g. client images) should be pushed to a common location, e.g. an S3 bucket. There are WP plugins that can help you with that. That'll save a lot of time otherwise spent synchronizing assets across environments.
I hope this helps you out -- if not, there's always SilverStripe ;)
It's easy: we build on a dev server and move to the live server with these SQL queries:
UPDATE wp_posts SET guid = REPLACE(guid, 'devserver.com', 'liveserver.com');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'devserver.com', 'liveserver.com');
UPDATE wp_options SET option_value = REPLACE(option_value, 'devserver.com', 'liveserver.com');
Last year I wrote a bash script to mirror a live MU installation into a sandbox. It's not perfect and not ideal, but it's a good starting point. It consists of mirroring the databases and files, then rewriting the mirrored database to reflect the sandbox.
See http://pp19dd.com/2011/01/bash-script-to-mirror-wordpress-mu-installation-into-a-sandbox/
It's important for developers to be able to take live and exact content snapshots to replicate conditions.
I just ran into this issue myself launching a new website. My solution was to use Vagrant. Vagrant is also platform agnostic so you could be developing on a Mac and a teammate is using Windows. Same Vagrant project runs on both.
I wrote a guide on how to set up Vagrant with WordPress from a production environment, running locally on your machine. I don't use WordPress that often, but every time I do, it's always a hassle to set up Apache and PHP on my Mac, and then make sure all the WordPress site URLs are updated in the database.
Once you've configured your Vagrant project, it's a single command for any developer on your team to be up and running with a local instance of WordPress. In short, Vagrant will mount your project directory from your host machine in the guest machine and run Apache, MySQL and PHP through the guest machine. You still use your host machine's IDE (as you normally would) and your host machine's browser. There is no uploading of files anywhere; it's just code, save, refresh browser, all on your local machine.
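To give a feel for the workflow: once the Vagrantfile and provisioning script are checked in, a teammate needs only something like this (the repository URL and port are placeholders):

git clone http://example.com/our-wp-project.git
cd our-wp-project
vagrant up    # boots the VM, provisions Apache/MySQL/PHP, mounts the project

Then point a browser at the forwarded port, e.g. http://localhost:8080.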
http://www.distilnetworks.com/wordpress-development-with-vagrant/ <- Explains how to set it all up with Vagrant
https://gist.github.com/markmalek/fd2e6e65385400d9cd47 <- the shell script when provisioning Vagrant
The shell script could probably be a lot better; this is what worked for me, but I would love to hear better suggestions or ideas. I'm new to Vagrant and now use it for some of our other projects, so I thought it would be a good fit here as well.
I do update the GUID in the shell script, which another answer says you shouldn't do, because feed readers use it. In this case it's irrelevant, since it's just for your local instance of WordPress, but I wouldn't make this change in production. See that answer for a better explanation.
Not a big deal: simply back up your whole site with the All-in-One WP Migration plugin and import it on the live server installation; the plugin will replace all URLs automatically.
Our code is in SVN. We develop using Visual Studio and the AnkhSVN plugin.
Having used VSS before SVN, I was used to the idea of locking files so other users know not to edit them while you are (in fact, I thought this was the main point of source control: preventing data loss from such conflicts).
I've been told this rarely happens and cases where you can't work because another dev is locking you out are more frequent (which sounds like a principle that might only apply to a certain subset of dev projects). But anyway, SVN is better and we're using it.
So when I do edit a file, and go to check it in, and find out the other user has edited it too, what do I actually do?
Surely there's a better way than saving a copy of my file, reverting my changes, updating from the server, and then merging my changes back in with WinMerge? When I right-click the file and click 'merge' I'm told I should update first, so that's obviously not what I need.
Update: partial answer
OK, it sounds like I just hit update, then SVN merges non-conflicting changes automatically and should let AnkhSVN know about any conflicting changes to allow some kind of resolution. Does anyone know how this works in AnkhSVN, i.e. what I'd actually do?
(if not I'll try it myself, accept the current top answer and update this question with the second half for posterity).
Actually, that's exactly what you need.
Edit: To clarify, what you need to do is just hit update. You don't need to make a separate copy, revert, etc. Updating from the repository will merge those changes with your own.
When you do the update, where you have local changes to a file that has also been changed in the repository, SVN will merge the file in the repository with your local file, preserving both sets of changes.
In effect, it should do automatically what you would do with WinMerge.
If the changes conflict, typically meaning they occur within the same lines, there will be a merge conflict, which has to be resolved. Not knowing AnkhSVN, I don't know what it will do in this case, but it should have some means of fixing things. Usually this involves looking over 3 files (your local file, the repository file, and the result of a successful merge), where you pick each part you want to keep from the two changed versions of the file.
After you've updated your local copy, merged and fixed any conflicts, you commit as usual.
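From the command line, the cycle looks like this (AnkhSVN drives the same Subversion operations from inside Visual Studio; the file name is just an example):

svn update                         # merges repository changes into your working copy
svn status                         # files flagged 'C' are in conflict
# edit the conflicted file (or use a three-way merge tool), then:
svn resolve --accept=working path/to/File.cs
svn commit -m "merged my changes with the latest revision"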
It's not a direct answer to your question, but I would recommend using SVN-Monitor in addition to AnkhSVN or any other Subversion client like TortoiseSVN.
With it you can watch your repository and be notified of changes in it. That way you can see what other devs did in the repository, and probably tell whether your commit will conflict with other checkins, or whether you can update your local copy without any conflict.
After months of using Drupal for my websites, I noticed the module uninstall tab in the modules list.
I've always uninstalled my modules by deleting their folders from filesystem (after disabling them). I was wondering if this is the wrong way to remove them.
Thanks
When you uninstall a module, rather than just disabling it and removing it, you let the module clean up after itself, including:
Delete tables it created
Delete system variables it used.
So uninstalling is a good cleanup, for the health of your database. Things will work fine if you don't do it, but why keep unused tables in your database?
Note:
It's up to the module creator to create the code needed to do the clean up, so not all modules do this very well.
I've always uninstalled my modules by deleting their folders from filesystem (after disabling them). I was wondering if this is the wrong way to remove them.
The files used by a module are not the only thing a module leaves on a Drupal site; there are database tables, Drupal variables, and cached values that still need to be removed when uninstalling a module. It's also possible that a module adds rows to a database table created by a different module.
By deleting the module files, you are not removing any references to the module contained in the system table. This means that if you copy the same module back after you deleted it, and you did delete its tables, the module is not going to re-create the database tables it needs.
I would definitely say it is the wrong way to uninstall a module.
Uninstalling is sometimes needed, as hook_install() won't fire again if a module was merely disabled and re-enabled. So if, for example, a module has some corrupted data, disabling and re-enabling it won't remove that.
You will probably be OK with your approach. However, one thing to be wary of is doing the following:
Disable the module
Remove its folder
At a later date, put the module back (not the same version)
Uninstall it
The reason for this is that hook_install() and hook_uninstall() should be mirror images of each other. Update hooks are used to keep a module's schema and settings up to date with what is provided in hook_install(); if you were to uninstall with an updated module (without running its updates), it would be trying to uninstall a different setup from the one it expects. The risk that something would go badly wrong is slim, but it is worth being careful.
The uninstall tab will remove anything in your database related to the module. This operation requires the module to still be present in your modules directory.
Simply deleting the files isn't 'wrong', but it will leave unneeded cruft in your database. The uninstall tab will not remove the module files for you; you need to do that yourself, as you have been doing.
Drush makes the module uninstall process much more pleasant, with something like:
drush pm-disable [module]      # or its shorthand: drush dis [module]
drush pm-uninstall [module]
In fact, Drush makes just about everything more pleasant (downloading/installing modules, dealing with install profiles, creating DB backups, and my personal favorite, updating your entire code base). If you're not already using it, I highly recommend giving it a try.
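As a sketch of that day-to-day workflow (the module name and file names are just examples):

drush dl views                 # download a module (pm-download)
drush en views                 # enable it (pm-enable)
drush sql-dump > backup.sql    # snapshot the database
drush up                       # update core and contrib code (pm-update)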
You can check this link - modules may have some additional uninstall instructions for them, but it looks like the majority of them don't - that's why you didn't have any issues :)
I have been developing a Drupal 6 site on my PC using XAMPP. I'm done now, and everything looks peachy.
Problem is, I need to put all my content (including custom modules and themes) up onto a staging server which only has a fresh Drupal 6 install on it. I can't imagine having to set up all my custom content types and whatnot all over again on the staging server.
So I ask, how does one go about doing what I need to do? Which is essentially duplicating my Drupal install from my PC, to the staging server.
The staging server is running Linux, and I develop on a Windows PC, if that helps.
Thanks in advance.
Copy all the files from development up to live, mysqldump your database, and run that dump on the live server. Then all you have to do is change the settings.php file to point at the right database, if for some reason 'localhost' is not also your MySQL host.
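A minimal sketch of that copy from a Unix-like shell, with placeholder paths, hostnames and credentials (on Windows, an SFTP client and phpMyAdmin do the same job):

mysqldump -u root -p drupal_dev > drupal_dev.sql
rsync -avz /path/to/local/drupal/ user@staging.example.com:/var/www/drupal/
scp drupal_dev.sql user@staging.example.com:/tmp/
ssh user@staging.example.com 'mysql -u drupal -p drupal < /tmp/drupal_dev.sql'
# finally, edit sites/default/settings.php on the staging server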
The quickest solution is probably the backup_migrate module. It is only a tool to copy your database; you could also use phpMyAdmin or similar instead if you wanted. The backup_migrate module does have some good default settings as to which tables to skip (like cache tables). All the settings etc. that are not defined in code are stored in your DB, so you only need to copy the DB to be set. You can choose to exclude some tables, like the node or user table, if you don't want to bring over your test data.
If you don't use Subversion, then you have to copy the files manually (rsync, scp, whatever) and the DB too (mysqldump).
What we usually do is have a hierarchy of independent Subversion repos as follows:
core
sites/all/modules/contributed
sites/all/modules/custom
sites/all/themes/ (we develop our own and don't use contributed themes)
sites/all/libraries
Then we use the svn:externals property so that if you check out "core" you get every associated repo.
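For illustration, wiring up the externals looks roughly like this (the repository URLs are placeholders):

svn propset svn:externals '
sites/all/modules/contributed  http://svn.example.com/drupal/contributed
sites/all/modules/custom       http://svn.example.com/drupal/custom
sites/all/themes               http://svn.example.com/drupal/themes
sites/all/libraries            http://svn.example.com/drupal/libraries
' .
svn commit -m "wire up sub-repositories via svn:externals"

After that, a plain checkout of the core repo pulls down every associated repo, which is what makes the one-line redeploy described below possible.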
We have about 2 main developers, with 4 other guys who may also contribute code to the site. Each has their own local dev environment, and we all share a common sandbox, where we make sure the stuff we wrote doesn't break someone else's module (it has happened before!).
We use SVN commit hooks to update the beta/staging/sandbox site upon commit.
With all that set up, [re]deploying a site is a simple matter of going to the proper folder, issuing an "svn co http://repolocation/reponame ." and then updating the DB.
Two last things to consider:
We are moving from SVN to git.
The Features module will allow you to save the changes you make to your site (views, content types, etc.) and package them into a deployable module, so you don't have to duplicate your efforts. We are also looking into using this ourselves.
I hope this helps you.
I second using backup_migrate. It's great.
When I'm installing a fresh site from development to production, I:
backup the site using backup_migrate module
copy all the files up to the server
edit the sites/default/settings.php to have the right database path and account info
do an import of the last backup_migrate dump (usually using mysql < backupfilename.sql, unless I already have Drupal set up and backup_migrate installed, in which case I use the GUI)
But take a look here for the official version:
http://drupal.org/node/776864
Now, you didn't ask, but when the site is live and users are contributing content, moving future development versions of your site from development/staging to production without blowing away live content is a whole different problem, and one that Drupal doesn't have a good answer for...
Andy-