CMS - How to work with multiple environments? Do I really need them?

I've never worked with any CMS and simply wanted to play with one. As I originally come from .NET roots, I was thinking about choosing Orchard Core CMS.
Let's imagine a very simple scenario: together with a colleague, I'd like to create a blog. As I'm used to working on web-based business systems and applications, it's quite normal for me to work with a code repository, multiple environments (dev/test/stage/prod), CI/CD, and database changes applied via migrations or scripts.
Now the question is: do I need all of this when building our blog with a CMS?
To be more specific, a few questions:
Should I build the blog locally (on my PC) with the CMS, create a few articles and then deploy it to the web, or should I set up the blog directly on the internet and add articles straight into the production environment?
How do I synchronize databases between environments (dev/prod)?
I can add that, as I do not expect many visitors on the website, I was thinking of using Orchard Core CMS together with SQLite. I also expect to customize code, add new modules, extend existing ones, etc., not just add content (articles). Please take that into consideration when answering.
So basically my question is: what should the workflow look like for a person (or a team) who wants to create, administer and maintain a CMS site, say a blog?
Should I work and create content locally, then publish it and somehow synchronize both the application and the database? (The database is my main question mark, particularly how to do that properly with SQLite.)
Or should all changes, code and content alike, simply be managed directly on a server, let's call it the production environment?
Excuse me if the question is silly or hard to understand, but I'm looking for any advice, as I really haven't found good examples or information about this; or maybe I'm searching in entirely the wrong direction.
Thanks in advance.

Great question, not at all silly ;)
When dealing with a CMS, you need to think about the data/content in very different terms from the code/modules, despite the fact that the boundary between them is not always completely obvious.
For Orchard, the recommendation is not to install modules directly in production, but to have a dev - staging - production type of setup: install new modules in a dev environment, test them in staging, and then deploy to production when it's safe to do so. Depending on the scale of the project, staging may be skipped for a more agile dev-to-prod setting, but the idea remains the same, and it is not very different from any modular application.
Then you have the activation and configuration of the settings of the modules you deploy. Because in a CMS like Orchard, those settings are considered data and stored in the database, they should be handled like content. This includes metadata such as the very shape of the content of your site: content types are data.
Data is typically not deployed the way code is, through staging and prod environments (although it can be, to a degree; more on that in a moment). One reason for this is that a CMS will often hold user-provided data such as reviews, ratings, comments or usage stats, and synchronizing all of that in both directions is very impractical. Another, even more important reason is that the whole point of using a CMS is to let the non-technical owners of the site manage content themselves in a fast and direct manner.
The difference between code and data is also visible in the way you secure their changes: for code, the usual source control is still the rule, whereas for content, you'll set up database backups.
Also important to mention is the structure of the database. You typically don't have to worry about this until you write your own modules: Orchard comes with a rich data migration feature that makes sure the database structure gets updated with the code that uses it. So don't worry about that, the database will just update itself as you deploy code to production.
Finally, I must mention that some CMS sites do need to be able to stage content and test it before exposing it to end users. There are variations of that: in some cases, being able to draft and preview content items is enough. Orchard supports that out of the box: any content type can be marked draftable. When that is not enough, there is an optional feature called Deployments that enables rich content deployment workflows that can be repeated, scheduled and validated. An important point concerning that module is that the deployment only applies to the subset of the site's content you decide it should apply to (and excludes, obviously, things like user-provided content).
So in summary, treat code and modules as something you deploy in a one-way fashion from the dev box all the way to production, with ordinary source control and deployment methods, and treat data depending on the scenario, from simple direct in production database instances with a good backup policy, to drafts stored in production, and then all the way to complex content deployment rules.
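To make that concrete for the SQLite setup from the question, here is a minimal sketch of the one-way deployment: code is pulled onto the server with Git, and the production content database is simply backed up on a schedule. All paths and names are assumptions (the database location is Orchard Core's usual default), and the sqlite3 command-line tool's .backup command takes a consistent copy even while the site is running:

    # On the production server: deploy code one way, then back up the content DB.
    cd /var/www/blog && git pull origin main                 # hypothetical path and branch
    sqlite3 App_Data/Sites/Default/yessql.db \
        ".backup 'backups/blog-$(date +%F).db'"              # assumes a backups/ directory exists

The important asymmetry is that code only ever flows from dev toward production, while the database file never leaves production except as a backup.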

Related

Selective Continuous Integration with Git

My Django project's team is looking to keep the designer's CSS in a central place, preferably on the production server (so that there's one "truth" for the current design, a model he says he has worked with in the past). Assuming that this is even a good practice, it would mean setting up Git to deploy the CSS to production in a Continuous Integration (CI) manner.
However, I would want to restrict Git somehow for the designer so that he doesn't accidentally update any files other than CSS or HTML. Python and Django files would be updated by developers, who would be deploying in a more traditional manner: working in their own branches and having a human build manager merge everything into master when it is tested and ready.
Part of the reason that we want the designer to be able to deploy the CSS to a server is to avoid setting up the Django site locally on his laptop (he's not so technical outside of CSS, HTML, and Git).
Is this setup even a good idea? If not, what's the proper alternative?
Assuming that we set up a CI config off of the master branch, and allow the CSS to be pushed to master, can I even restrict the designer's ability to modify and check in non-CSS/HTML files? If so, how?
Is this setup even a good idea? If not, what's the proper alternative?
I have some reservations. It sounds like your designer is going to be the only person pushing changes to production without any gates: no code review, no tests, etc. Continuous integration is great, but a sane process includes safeties that prevent bad deploys. Since the rest of the team is following a different process, you'll end up managing two different pipelines. That's a waste of effort, and inevitably one of them (probably the designer's) falls apart due to lack of attention.
The alternative is to put everyone on the same process. Teach the designer how to run the application locally, or build a harness that makes it easier. Unless your site is entirely static, how can they even see what their changes look like without that? Maybe it's more work to train them up, but it's an excellent opportunity for personal growth.
Assuming that we set up a CI config off of the master branch, and allow the CSS to be pushed to master, can I even restrict the designer's ability to modify and check in non-CSS/HTML files? If so, how?
If you go this route, you can use Git hooks to restrict what the designer is allowed to commit. You can either put a pre-commit hook on their client or, if you control the server, a pre-receive hook that runs only for the designer's user. Either one can look at the committed files and block the commit or push if any of them are not CSS or HTML. There's a pre-commit framework called Overcommit that might be helpful to you. If you're using a code review tool, most have a place to hook in a bot that leaves a comment or blocks the merge when a file that shouldn't change has been modified.
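As a rough sketch of the client-side variant (the CSS/HTML whitelist and messages are assumptions to adapt; a server-side pre-receive hook would inspect the pushed revisions instead):

    #!/bin/sh
    # .git/hooks/pre-commit (must be executable): abort the commit if any
    # staged file is not CSS or HTML. Simplified: filenames containing
    # spaces would need `git diff -z` handling.
    status=0
    for file in $(git diff --cached --name-only); do
        case "$file" in
            *.css|*.html) ;;                                # allowed
            *) echo "Blocked: $file is not CSS/HTML" >&2
               status=1 ;;
        esac
    done
    exit $status

Keep in mind that a client-side hook is advisory only (the designer could remove it), which is another argument for the pre-receive variant if you control the server.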
Another option here is to trust your coworker. Presumably they were hired because they're effective and useful, so you can save a lot of effort on building up restrictions if, instead, everyone is clear on what they're supposed to be doing and generally doesn't screw it up.

WordPress pages and version control

We are a software development company and use WordPress for the static portion of our web site. Naturally, all our workflow is built around version control: multiple developers -> continuous integration -> staging -> deployment.
Our challenge with integrating WordPress into this workflow is that its database is stuck like a bone in the throat: you cannot put it under version control, easily roll back, promote from staging to production, etc.
I am wondering what people do in similar situations? I would like to find a way to integrate WP into the development workflow and not the other way around :-)
Clarification: we want to "develop" and test pages on the staging system and, when they are ready, move them over to production as part of the version upgrade process. We don't want to do a full replication of the staging database to production.
That's a common question and one that I've worked on tackling. I've written some code to address these issues, albeit code that's not ready for distribution. Basically, the idea is to create scripts that import the content, and then version control the scripts. (Actually, my approach uses a custom import/export format designed to be easy to hand-modify, but the idea is similar.)
Anyway, there are some related questions over on StackOverflow's sister site WordPress Answers:
Questions tagged with the term [staging]
Questions tagged with the term [deploy]
UPDATE
Per the clarification, this would probably be helpful too:
Is there any way to draft a revision of a published page or post? What workarounds have you used?
Hope this helps.
-Mike
I've just hit the same problem. For now we are using MySQL dump files to export/import database content, but it gets ugly with several people working on database changes.
Since the team that works on the project is all internal and consists of just a few people, I'm thinking in the direction of locking the database dump file in the VCS. Subversion had this functionality built in, but we are using Git, which is, I think, conceptually opposed to any kind of locking.
Probably we'll work around it with a pre-commit hook that checks for the existence of a lock file next to the dump. The person who committed the lock file will be the only one allowed to commit the dump; once they finish their work, they will commit the removal of the lock file.
It sounds ugly, I know. But I've thought about it for a while and don't see an elegant solution yet.
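For what it's worth, a sketch of that hook; the dump and lock file paths are hypothetical, and the lock file is assumed to contain the lock holder's Git user.name:

    #!/bin/sh
    # pre-commit: only the person named in db/dump.lock may commit db/dump.sql.
    if git diff --cached --name-only | grep -qx 'db/dump.sql'; then
        holder=$(cat db/dump.lock 2>/dev/null)
        me=$(git config user.name)
        if [ "$holder" != "$me" ]; then
            echo "db/dump.sql is locked by '${holder:-nobody yet}'." >&2
            echo "Commit db/dump.lock containing your name first." >&2
            exit 1
        fi
    fi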
If you're only using WordPress for static content, then any tool or methodology for version controlling databases should work; for example, work the mysql command-line tools into your CI and deployment routines.
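For example, with placeholder database names, and credentials assumed to be configured in ~/.my.cnf so the commands run unattended:

    mysqldump wordpress_staging > wp-content.sql     # export from staging
    git add wp-content.sql && git commit -m "content snapshot"
    mysql wordpress_prod < wp-content.sql            # import on production

One caveat worth knowing: WordPress stores absolute URLs (and some serialized PHP) in the database, so a dump moved between environments usually needs a search-and-replace step as well.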

Deploying changes on a living Drupal site

I really like Drupal, somehow. But what disturbs me most is that I can't figure out a clear way of deploying. Drupal stores a lot of stuff in the database (views, CCK, workflow, triggers, etc.) that needs to be updated.
I've seen some modules that could be used for this task (e.g. Features), but I'm not sure they are sufficient. They also only exist for Drupal 6, and I currently have to work on a Drupal 5 site where upgrading is not yet an option.
Any ideas?
This is a weakness. Drupal doesn't have the developer tools built in that make development and deployment easy the way Rails does, for example. One problem is that Drupal isn't natively aware of its environment. Secondly, there are too many different methods and modules that require special care. It can get very confusing. But things are getting better with drush and drush make.
I'm assuming here that you have a development environment on your local machine and a live or staging server you upload to.
The first thing you have to do is work out how to move your database fixture and your code between your server and your development environment very quickly. You need to make this procedure as painless as possible so you can keep different versions of your site in sync without much effort. That way you will hopefully have less change to manage every time you deploy. Hopefully...
Moving the database around isn't too hard. You could use phpMyAdmin or mysqldump, but the Backup and Migrate module is my favorite tool.
Uploading code from your local repository or site can be done in a few ways. If you use a version control system like Git, you can commit on your local machine and check out again on the staging server. There are also dedicated deployment tools, like Capistrano, you should take a look at (if you know this stuff already, it may benefit others to read). If you're using FTP, you should probably try something different.
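A minimal sketch of that round trip, assuming SSH access, placeholder host/database/path names, and MySQL credentials kept in ~/.my.cnf on both machines:

    ssh deploy@live.example.com 'mysqldump drupal_live' > drupal.sql   # pull the live DB
    mysql drupal_dev < drupal.sql                                      # load it into dev
    rsync -az deploy@live.example.com:/var/www/html/ ~/dev/mysite/     # mirror the code down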
If you're working with a site that is already in production, you can afford to make small incremental changes to your local site, then repeat them on the live site and download the new version of the database once your changes are in place. This does mean you handle the database twice, but it can be a safe way of doing things: it keeps the two databases closer to each other and minimises risk.
You can also export views and move them to your server, either in your code or by importing them into your live site. There is a hack for getting around deploying CCK changes here: http://www.tinpixel.com/node/53 - it works OK but cannot truly manage changes like rollbacks. (Respect to the guy who wrote it.)
You can also use hook_update_N() to capture changes and then run update.php to apply them. I worked on a Drupal 5 site with dozens of developers, and this was the only way to keep things moving forward. This may be a good option if your site is live or if you need all database schema changes captured in a version control system (so you can roll back).
Also: take a look at drush and drush make. These tools can be of great benefit. I can't remember how much support there is for Drupal 5.
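The shell side of the hook_update_N() approach looks roughly like this; drush command names have shifted across versions, so treat these as the Drupal 6/7-era spellings:

    drush sql-dump > pre-update-backup.sql   # snapshot first, so you can roll back
    drush updatedb -y                        # run any pending hook_update_N() implementations
    drush cache-clear all                    # rebuild caches after the schema changes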
One final method of dealing with this is not to use CCK or Views at all (and rely on hook updates). But this is really only suitable for enterprise sites where you have big developer resources. It may seem like a strange suggestion, but it can make this whole problem go away completely.
Sorry I could not give you a clear answer; one does not exist yet. You'll end up finding your own rhythm once you get into it. Just keep backups of your database so you can roll back easily enough.

Working with version control on a Drupal/CMS project

I was wondering how teams that develop sites using Drupal (or any other CMS) integrate version control (Subversion, Git or similar) into their workflow. You'd obviously want your custom code and theme files under version control, but when you use a CMS such as Drupal, a lot of the work consists of configuring modules and settings, all of which is stored in the database.
So when you are a team of developers, how do you collaborate on a project like this? Dumping the database into a file and putting that file under version control might work, I guess, but when the site is live the client is constantly adding content, which makes syncing a bit problematic.
I'd love to know how others are doing this.
You are correct that this is an issue for Drupal: version control works fine until you turn the site over to your client or open it up to users.
Your question seems like a more specific version of this one, which touched on version control in the Drupal workflow. You may find some answers there that help.
For some projects, I have exported all of the views to code, using that feature of the Views module, and I have one project where all of the blocks have been exported, as well. (Although that was a development exercise and not a customary thing to do with blocks.)
Take a look at the work Development Seed is doing to address this problem. They are leading the development of the Context, Features, and Spaces modules, which work together to store configuration data in modules (outside of the DB) so that it can be versioned with the code.
There is a Drupal group called Packaging & Deployment for discussing the various solutions that are being developed for this issue.
Right now there are a lot of efforts toward creating something that will handle the dev -> production difficulties with Drupal in relation to the database. Features, which flaminglogos mentioned, is one, but I feel it is more focused on creating standalone projects, i.e., ones that would be installed on many sites.
For simply keeping your dev and prod databases in sync, I'd take a look at http://drupal.org/project/deploy and http://drupal.org/project/dbscripts. They support syncing and merging DB-side Drupal config data.
I can't guarantee they are ready for prime time, though...
There is a lot of effort going into shipping the next Drupal version with configuration in code. That is the key to keeping it in a version control system.
For now you can use the Features module: with it you can export things like content types, views, etc. to code, and then compare, version and revert them as you need.
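For illustration, the Features workflow is usually driven from drush along these lines; the feature name is made up here, and the exact arguments and module path differ between Features releases:

    drush features-export blog_config                # write exportable config out as a module
    git add sites/all/modules/blog_config
    git commit -m "capture content types and views in code"
    # then, on the target environment, after deploying the code:
    drush features-revert blog_config                # make the site match the exported config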

How do you deploy and manage a C# web application to a customer with some minor differences from your base project?

I have a semi-large web application that we run locally, and I need to deploy it at another location. The second location will require some slight modifications to the project (mostly cosmetic). How do you manage these differences, and what do you use to distribute the site and its updates to a customer like this?
Edit:
Right now our web app runs in-house, and we build with CruiseControl.NET and MSBuild with WDP. What would be a good option for deployment to the customer? We will not be updating their site for them, so a solution that is simple for them to deploy and update is desirable.
Branch your code.
Hopefully your code is under source control (if not, start now!). Branch from the base to a "Customer X" branch and make just the slight cosmetic modifications in that branch. Then build and deploy off that branch for that customer.
Additionally, if the changes are minor enough, you could try to make them configurable. That way you could deploy the same site everywhere and just change the configuration to match what the customer wants. The more complex the differences, the harder it will be to make them configurable, though.
After reviewing the comments: it's good to note that configuration is practical, but ONLY if the number of changes is minor; otherwise you will pollute your code with configuration logic. (Thanks, commenters.)
So: lots of changes -> branch (more maintainable); few minor changes -> make them configurable (more practical).
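Assuming Git for concreteness (any VCS with cheap branches works the same way), the branch-per-customer flow is roughly:

    git checkout -b customer-x master   # cut the customer branch from the base
    # ...commit the cosmetic tweaks on customer-x, build and deploy from it...
    git merge master                    # periodically fold base-project fixes back in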
We have to do this all the time. We try to generalize and make the differences between versions configurable. The most common reasons for customization are:
additional database fields: we implemented a dynamic way to store these items in the DB
UI layout: we have special folders for images and CSS files, which are loaded on demand
different mandatory input fields: we store the definitions in the configuration and activate them programmatically
special reports: we put the template files in a custom folder so they are chosen instead of the standard templates
Some changes require programming new modules. We code them in a custom library which is dynamically loaded inside the main application.
We usually handle those differences by being data-driven. The customer's difference is just another setting, and any other user in the future could reuse the same "custom" options later on.
Creating "one-off"s doesn't scale.
Custom patches are a pain for this very reason. We typically just branch in our source control system and manually apply the changes with a script after updating. Because of the additional overhead, we discourage custom patches as much as possible.
