How to implement a multidev environment / workflow? - WordPress

Context: switch from Pantheon
Context: switch from Pantheon
We are currently hosted with Pantheon, an opinionated platform. We liked their workflow (code up from dev to live, content down from live to dev) and used it with their multidev environments (i.e. easy cloning and pushing all around). After six years they are changing their pricing structure for the first time, which makes multidev prohibitively expensive for a single site.
Usage: multidev for feature branches
We use multidev for feature branches. For example, one branch might be a proposed redesign of a page: I can keep working on it in one instance without polluting the dev environment with a half-built feature, while using another instance to keep hammering out minor fixes and improvements and push those through to dev, QA, and then live.
Much later, when the feature branch is finally done, it gets merged with the current version and pushed to dev, QA, and then live.
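In plain git terms, the branch-per-feature part of that flow looks roughly like the sketch below; the branch names are placeholders, not anything Pantheon-specific.

    # Long-running feature work lives on its own branch (and its own environment)
    git checkout -b redesign-page develop

    # Meanwhile, minor fixes keep flowing through the normal pipeline
    git checkout develop
    git commit -am "Minor fix"      # then deploy develop -> QA -> live

    # Much later, when the redesign is done, bring it up to date and merge it back
    git checkout redesign-page
    git merge develop               # pick up everything shipped in the meantime
    git checkout develop
    git merge redesign-page         # then deploy develop -> QA -> live as usual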
Goal: new flow
I need either to come up with a way to mimic this workflow that is easy enough that it actually gets done (manually setting up subdomains, migrating a copy of the site, and so on seems like a lot of busywork), or to replace it with a different sane workflow.
It's a custom WordPress site with one to a few devs working on features, plus QA, PM, etc. who need to be able to preview the feature branch before it's even allowed into the dev environment.
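For the cloning half of the problem, a throwaway feature environment for a WordPress site can be scripted rather than set up by hand, which is what makes this kind of workflow cheap enough to actually use. Below is a minimal sketch using WP-CLI and rsync; all paths, domains, and database names are invented for illustration, and you would still need a vhost or wildcard subdomain pointing at the new directory.

    #!/usr/bin/env bash
    set -euo pipefail

    SRC=/var/www/dev           # existing dev environment
    DST=/var/www/feature-x     # new feature environment, served at feature-x.example.com

    # 1. Copy code and uploads
    rsync -a --exclude wp-config.php "$SRC/" "$DST/"

    # 2. Copy the database into a schema of its own
    wp --path="$SRC" db export /tmp/dev.sql
    mysql -e "CREATE DATABASE IF NOT EXISTS wp_feature_x"
    mysql wp_feature_x < /tmp/dev.sql

    # 3. Point the clone at its own database and URL
    cp "$SRC/wp-config.php" "$DST/wp-config.php"   # then edit DB_NAME to wp_feature_x
    wp --path="$DST" search-replace 'dev.example.com' 'feature-x.example.com' --all-tables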

Related

CMS - How to work with multiple environments? Do I really need them?

I've never worked with any CMS, and I simply wanted to play with one. Since I originally come from a .NET background, I was thinking of choosing Orchard Core CMS.
Let's imagine a very simple scenario: together with a colleague, I'd like to create a blog. As I'm used to working on web-based business systems and applications, it's normal for me to work with a code repository, multiple environments (dev/test/stage/prod), CI/CD, and database changes via migrations or scripts.
Now the question is whether I need all of this when building our blog with a CMS.
To be more specific, I can ask a few questions:
Should I create the blog with the CMS locally (on my PC), create a few articles, and then deploy it to the web, or should I create the blog directly on the internet and add articles in the prod environment?
How do I synchronize databases between environments (dev/prod)?
I can add that, as I do not expect many visitors to the website, I was thinking of using Orchard Core CMS together with SQLite. I also expect to customize code, add new modules, extend existing ones, etc., not only add content (articles). You can take that into consideration when answering the question.
So basically my question is: what should the workflow be for a person who wants to create, administer, and maintain a CMS site (say, a blog), either alone or as a team?
Should I work and create content locally, then publish it and somehow synchronize both the application and the database (the database is my main question mark, especially how to do that properly with SQLite)?
Or should all the changes, code and content alike, simply be managed directly on the server, i.e. the production environment?
Excuse me if the question is silly or hard to understand, but I'm looking for any advice, as I really haven't found any good examples or information about this, or maybe I'm looking in totally the wrong direction.
Thanks in advance.
Great question, not at all silly ;)
When dealing with a CMS, you need to think about the data/content in very different terms from the code/modules, despite the fact that the boundary between them is not always completely obvious.
For Orchard, the recommendation is not to install modules in production, but to have a dev - staging - production type of environment: install new modules on a dev environment, test them in staging, and then deploy to production when it's safe to do so. Depending on the scale of the project, the staging may be skipped for a more agile dev to prod setting but the idea remains the same, and is not very different from any modular application.
Then you have the activation and configuration of the settings of the modules you deploy. Because in a CMS like Orchard, those settings are considered data and stored in the database, they should be handled like content. This includes metadata such as the very shape of the content of your site: content types are data.
Data is typically not deployed like code is, with staging and prod environments (although it can, to a degree, more on that in a moment). One reason for this is that a CMS will often feature user-provided data, such as reviews, ratings, comments or usage stats. Synchronizing all that two-ways is very impractical. Another even more important reason is that the very reason to use a CMS is to let non-technical owners of the site manage content themselves in a fast and direct manner.
The difference between code and data is also visible in the way you secure their changes: for code, usual source control is still the rule, whereas for content, you'll set up database backups.
Also important to mention is the structure of the database. You typically don't have to worry about this until you write your own modules: Orchard comes with a rich data migration feature that makes sure the database structure gets updated with the code that uses it. So don't worry about that, the database will just update itself as you deploy code to production.
Finally, I must mention that some CMS sites do need to be able to stage contents and test it before exposing it to end-users. There are variations of that: in some cases, being able to draft and preview content items is enough. Orchard supports that out of the box: any content type can be marked draftable. When that is not enough, there is an optional feature called Deployments that enables rich content deployment workflows that can be repeated, scheduled and validated. An important point concerning that module is that the deployment only applies to the subset of the site's content you decide it should apply to (and excludes, obviously, stuff like user-provided content).
So in summary, treat code and modules as something you deploy in a one-way fashion from the dev box all the way to production, with ordinary source control and deployment methods, and treat data depending on the scenario, from simple direct in production database instances with a good backup policy, to drafts stored in production, and then all the way to complex content deployment rules.

How to minimize downtime when upgrading all files of a site and there are lots of tweaks to do?

I need to make some big upgrades to my site (it's a Drupal 6 site, and I'm upgrading it to Drupal 7), and it's going to take me a while to complete all the items I need to change (I have to do some tweaks after upgrading to Drupal 7 because some modules have different configurations).
If all goes ok, it should take me just a day, but something could go wrong.
So, what's the best way to do a clean transition and have the site offline for just a couple of minutes?
I thought about cloning my site to another folder (and maybe assigning it one of my unused domains for easier access while working on it), and then doing the upgrades inside the cloned copy.
So when ready, I would just point my domain to the new cloned folder.
The only downtime would be the time it takes for the domain change to take effect at the registrar...
What do you do when you need to upgrade a site and do work that's going to take you a while?
Your approach for cloning the website's document root to another location and making sure it all works is what is called "staging" an upgrade, and it is a widely practiced method of ensuring short downtime. It gives you the ability to test your updates before you release them, which is great. Assigning an unused domain is nice because then you can test in a deployed (rather than local) way.
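One refinement worth considering: instead of waiting on the registrar, keep the domain where it is and swap the web root atomically once the upgraded clone is tested. A sketch, assuming the vhost's document root is a symlink (the paths are made up):

    # /var/www/site-d6  -> live Drupal 6 site
    # /var/www/site-d7  -> cloned copy, upgraded to Drupal 7 and tested on a spare domain
    # The vhost's DocumentRoot points at the symlink /var/www/current

    # Cut-over: put the old site in maintenance mode, sync any last content, then:
    ln -sfn /var/www/site-d7 /var/www/current
    sudo systemctl reload apache2    # or nginx; downtime is seconds rather than DNS propagation time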

How do I keep compiled code libraries up-to-date across multiple web sites using version control?

Currently, we have a long list of various websites throughout our company's intranet. Most are inside a firewall and require an Active Directory account to access. One of our problems, as of late, has been the increase in the number of websites and the addition of a common code library that stores our database access classes, common helper functions, serialization methods, etc. The goal is to use that framework across all websites throughout the company.
Currently, we have upgraded the in-house data entry application with these changes consistently. It is up-to-date. The problem, however, is maintaining all of the other websites. Is there a best practice or a way to find out the versions on each website and upgrade accordingly? Can I have a centralized place where I keep these DLLs and have the sites reference them? What's the best way to find out what versions are on these websites without having to go through each and every site, check the version, and upgrade after every change?
Keep in mind, we run the newest TFS and are a .NET development team.
At my job we have a setup similar to yours, with lots of internal applications that use common libraries, and I have spent the best part of a year sorting this all out.
The first thing to note is that nothing you mentioned really has anything to do with TFS, but is really a symptom of the way your applications, and their components, are packaged and deployed.
Here are some ideas to get you started:
Setup automated/continuous builds
This is the first thing you need to do. Use the build facility in TFS if you must, or make the investment into something like TeamCity (which is great). Evaluate everything. Find something which you love and that everyone else can live with. The reason why you need to find something you love is because you will ultimately be responsible for it.
The reason why setting up automated builds is so important is because that's your jumping off point to solve the rest of your issues.
Setup automated deployment
Every deployable artifact should now be built by your build server. No more manual deployment. No more deployment from workstations. No more Visual Studio Publish feature. It's hard to step away from this, but it's worth it.
If you have lots of web projects, look into Web Deploy, which can easily be automated with MSBuild or PowerShell, or go fancy and try something like Octopus Deploy.
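For reference, the kind of command a build step ends up running with Web Deploy looks something like the line below; the package name, server URL, site name, and credentials are placeholders, so treat it as a sketch rather than a copy-paste recipe.

    msdeploy.exe -verb:sync -source:package="Intranet.Site.zip" -dest:auto,computerName="https://webserver:8172/msdeploy.axd?site=IntranetSite",userName="deploy",password="...",authType="Basic" -allowUntrusted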
Package common components using NuGet
By now your common code should have its own automated builds, but how do you automatically deploy a common component? Package it up as a NuGet package and either put it on a share for consumption or host it on a NuGet server (TeamCity has one built in). A good build server can automatically update your NuGet packages for you (if you always need to be on the latest version), and you can inspect which version each site is referencing by checking its packages.config.
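As a rough sketch of what that looks like from the command line (the package ID, version, and feed URL are invented for illustration):

    # Build server: pack the common library and push it to an internal feed
    nuget pack Company.Common.csproj -Version 1.4.2 -Properties Configuration=Release
    nuget push Company.Common.1.4.2.nupkg -Source http://buildserver/nuget/feed

    # Consuming website: pull the new version into its packages.config
    nuget update packages.config -Id Company.Common

The packages.config in each site then doubles as the version report you were after: it records exactly which version of the shared library that site references.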
I know this is a lot to take in, but it is in its essence the fundamentals of moving towards continuous delivery (http://continuousdelivery.com/).
Please be aware that getting this right will take a long time, but the process is incremental and you can evolve it over time. However, the longer you wait the harder it will be. Don't feel like you need to upgrade all your projects at the same time; you don't. Just do the ones that are causing the most pain.
I hope this helps.
I'd just like to step outside the space of a specific solution for your problem and address the underlying desire you have to consolidate your workload.
Be aware that any patching/upgrading scenario will have costs that you must address - there is no magic pill.
Particularly, what you want to achieve will typically incur either a build/deploy overhead (as jonnii has outlined), or a runtime overhead (in validating the new versions to ensure everything works as expected).
In your case, because you have already built your products, I expect you will go the build/deploy route.
Just remember that even with binary equivalence (everything compiles, and unit tests pass), there is still the risk that the application will behave somehow differently after an upgrade, so you will not be able to avoid at least some rudimentary testing across all of your applications (the GAC approach is particularly vulnerable to this risk).
You might find it easier to accept that just because you have built a new version of a binary, doesn't mean that it should be rolled out to all web applications, even ones that are already functioning correctly (if something ain't broke...).
If that is acceptable, then you will reduce your workload by only incurring resource expense on testing applications that actually need to be touched.

Deploying changes on a live Drupal site

I really like Drupal. But what bothers me most is that I can't figure out a clear way to handle deployment. Drupal stores a lot of stuff inside the database (Views, CCK, Workflow, Trigger, etc.) that needs to be updated.
I've seen some modules that could be used for this task (e.g. Features), but I'm not sure whether they are sufficient. They are also only for Drupal 6, and I currently have to work on a Drupal 5 site where upgrading is not yet an option.
Any ideas?
This is a weakness. Drupal doesn't have the developer tools built in that make development and deployment easy the way Rails does (for example). One problem is that Drupal isn't natively aware of its environment. Secondly, there are too many different methods and modules that require special care. It can get very confusing. But things are getting better with drush and drush make.
I'm assuming here that you have a development environment on your local machine and a live or staging server you upload to.
The first thing you have to do is work out how to get your database fixture and your code to and from your server and your development environment very quickly. You need to make this procedure as painless as possible so you can keep different versions of your site in sync without much effort. This will mean you will hopefully have less change to manage every time you deploy. Hopefully...
Moving the database around isn't too hard. You could use phpMyAdmin or mysqldump, but the Backup and Migrate module is my favorite tool.
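If you would rather script it than click through a module, the same database shuffle can be done with mysqldump, or with drush site aliases where they are available; the host and database names below are placeholders:

    # Pull the live database down into your development copy
    ssh user@live.example.com "mysqldump --single-transaction live_db | gzip" > live_db.sql.gz
    gunzip -c live_db.sql.gz | mysql dev_db

    # Or, with drush aliases configured for both sites:
    # drush sql-sync @live @dev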
Uploading code from your local repository or site can be done in a few ways. If you use a version control system like git, you can commit on your local machine and check out again on the staging server. There are also special deployment tools like Capistrano you should take a look at (if you know this stuff already, it may benefit others to read). If you're using FTP, you should probably try something different.
If you're working with a site that is still in production, you can afford to make small incremental changes to your local site, then repeat them on the live site and download the new version of the database when your changes are in place. This does mean you handle the database twice, but it can be a safe way of doing things. It keeps both your databases closer to each other and minimises risk.
You can also export Views and move them to your server, either in your code or by importing them into your live site. There is a hack to get around deploying CCK changes here: http://www.tinpixel.com/node/53. It works OK but cannot truly manage changes like rollbacks. (Respect to the guy who wrote that.)
You can also use hook_update_N() to capture changes and then run update.php to apply them. I worked on a D5 site with dozens of developers and this was the only way to keep things moving forward. This may be a good option if your site is live or if you need all database schema changes captured in a version control system (so you can roll back).
Also: take a look at drush and drush make. These tools can be of great benefit. I can't remember how much support there is for D5.
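Where drush does work for your version, the hook_update_N() approach above reduces to a short, repeatable deploy sequence on the server; the path is a placeholder:

    cd /var/www/example-site
    git pull                  # bring across the new code, including its hook_update_N() implementations
    drush updatedb -y         # run pending update hooks, i.e. update.php from the command line
    drush cache-clear all     # clear caches so the new code and schema take effect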
One final method of dealing with this is not to use CCK or Views at all (and rely on hook updates). But this is really only suitable for enterprise sites where you have big developer resources. It may seem like a strange suggestion, but it can negate this whole problem completely.
Sorry I could not give you a clear answer; one does not exist yet. You'll end up finding your own rhythm once you get into it. Just keep backups of your database so you can roll back to them easily enough.

Deployment process for site maintained by 2 companies

I work for an agency that has been responsible for maintaining a client's .NET 3.5 website for a number of years, along with another agency. Work is farmed out by the client to both agencies on a pretty much ad-hoc basis.
The site is quite old and has a structure and deployment process to match. The site is set up so that developers have local copies of the site. There is a staging environment, where client feedback and approval happen, followed by the live environment. There are a number of scenarios where work from one agency will be on the staging environment awaiting approval, and changes from the other agency need to go through staging, approval, and deployment to live without the original changes being affected. Most of the time we get away with it, but it's far from ideal, as not all conflicts can be resolved.
Up until recently we had still been on SourceSafe but have moved over to Subversion and are running into many more scenarios where work is overwritten. This obviously isn't a fault with Subversion; rather, the locking of projects and files in SourceSafe served as a good indicator to developers from both agencies that someone was working on that project or file. The process previously was that you checked out a file from SourceSafe and kept it checked out until the changes went live (I acknowledge that this is a rubbish process, hence the desire to move away from SourceSafe and that model).
The trouble is that even though we know that the way we do it now is bad, I’m at a bit of a loss as to how to restructure the overall site and deployment process to make it “better”. Some ideas we’ve pondered are:
Separate dev, test, and live branches in Subversion, so we commit to and build the appropriate branch before deploying (not really sure how to make that work)
Single repository for both agencies but a separate staging environment for each. Staging environment could then reflect the changes assigned to each agency
A separate instance of the staging site for each branch
Any suggestions of next steps or examples of similar situations and solutions from the SO community would be greatly appreciated!
Thanks
Joel
I would recommend:
Use git; it's really very good at working out how to merge changes.
Have separate staging environments for each company; then, once changes are approved, merge (carefully) into a final staging environment that exists only to help sort out merging issues, and then push to live.
Also make sure both agencies know who's working on what, and try not to work on part X of the app until the other agency has finished with it. In the end this is the best solution: a little bit of communication.
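A minimal sketch of the branch layout that suggestion implies; the branch names are just suggestions:

    # One shared repository; each agency works on its own long-lived branch
    git checkout -b agency-a main
    git checkout -b agency-b main
    git checkout -b staging main       # backs the final, shared staging environment

    # When agency A's work is approved, merge it into staging and sort out conflicts there
    git checkout staging
    git merge agency-a

    # Once staging is verified, merge to main and deploy main to live
    git checkout main
    git merge staging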

Resources