Further to my question at accidentally-released-code-to-live-how-to-prevent-happening-again: after client UAT we often have the client saying they are happy for a subset of features to be released, while they would rather have the others in a future release instead.
My question is: "How do you release 2/3 (two out of three) of your features?"
I'd be interested in how the big players such as Microsoft handle situations like:
"Right, we're only going to release 45 of the 50 initially proposed features/fixes in the next version of Word; package it and ship it out."
Assuming those 5 features have already been started but won't make the next release, how can you leave them out of the release build and deployment?
How do you release 2/3 of your developed features?
How to release a subset of deliverables?
-- Lee
If you haven't thought about this in advance, it's pretty hard to do.
But in the future, here's how you could set yourself up to do this:
Get a real version control system, with very good support for both branching and merging. Historically, this has meant something like git or Mercurial, because Subversion's merge support has been very weak. (The Subversion team has recently been working on improving their merge support, however.) On the Windows side, I don't know which VC tools are best for something like this.
Decide how to organize work on individual features. One approach is to keep each feature on its own branch, and only merge it back to the main branch when the new feature is ready. The goal here is to keep the main branch almost shippable at all times. This is easiest when the feature branches don't sit around collecting dust—perhaps each programmer could work on only 1 or 2 features at a time, and merge them as soon as they're ready?
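For example, a minimal git sketch of that workflow might look like the following (branch name and commit message are just illustrative):

    # start the feature on its own branch off the main line
    git checkout -b feature/report-export master
    # ...commit work on the branch until the feature is ready...
    git commit -am "Add CSV export for reports"
    # merge it back promptly so master stays almost shippable
    git checkout master
    git merge --no-ff feature/report-export
    git branch -d feature/report-export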
Alternatively, you can try to cherry-pick individual patches out of your version control history. This is tedious and error-prone, but it may be possible for certain very disciplined development groups who write very clean patches that make exactly 1 complete change. You see this type of patch in the Linux kernel community. Try looking at some patches on the Linux 2.6 gitweb to see what this style of development looks like.
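If you do go the cherry-picking route, the git mechanics are simple; the discipline of keeping each commit a single complete change is the hard part. A sketch, with placeholder branch name and commit hashes:

    # build the release from a known-good starting point
    git checkout -b release-candidate <known-good-commit>
    # pull over only the commits for the features that are actually shipping
    git cherry-pick abc1234
    git cherry-pick def5678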
If you have trouble keeping your trunk "almost shippable" at all times, you might want to read a book on agile programming, such as Extreme Programming Explained. All the branching and merging in the world will be useless if your new code tends to be very buggy and require long periods of testing to find basic logic errors.
Updates
How do feature branches work with continuous integration? In general, I tend to build feature branches after each check-in, just like the main branch, and I expect developers to commit more-or-less daily. But more importantly, I try to merge feature branches back to the main branch very aggressively—a 2-week-old feature branch would make me very, very nervous, because it means somebody is living off in their own little world.
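In git terms, "aggressively" can simply mean regularly folding the main branch into each feature branch, and folding the feature branch back as soon as it is done (branch names are illustrative):

    # refresh the feature branch daily so it never drifts far from master
    git checkout feature/report-export
    git merge master
    # and merge it back into master the moment it is finished and green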
What if the client only wants some of the already-working features? This would worry me a bit, and I would want to ask why the client only wants some of them. Are they nervous about the quality of the code? Are we building the right features? If we're working on features the client really wants, and if our main branch is always solid, then the client should be eager to get everything we've implemented. So in this case, I would first look hard for underlying problems with our process and try to fix them.
However, if there were some special once-in-a-blue-moon reason for this request, I would basically create a new trunk, re-merge some branches, and cherry-pick other patches. Or disable some of the UI, as the other posters have suggested. But I wouldn't make a habit of it.
This reminds me a lot of an interview question I was asked at Borland when I was applying for a program manager position. There the question was phrased differently (there's a major bug in one feature that can't be fixed before a fixed release date), but I think the same approach can work: remove the UI elements for the features that are being pushed to a future release.
Of course this assumes that the features you want to leave out have no effect on the rest of what you want to ship... but if that's the case, just changing the UI is easier than trying to make a more drastic change.
In practice, what I think you would do is branch the code for the release and then make the UI removals on that branch.
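In git, that might look roughly like this (branch name and commit message are illustrative):

    # cut the release branch from the mainline
    git checkout -b release-1.0 master
    # on the release branch only, hide the entry points to the deferred features
    git commit -am "Hide menu items and screens for deferred features"
    # master keeps the feature code intact for the next release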
It's usually a function of version control, but doing something like that can be quite complicated, depending on the size of the project and how many changesets/revisions you have to classify as desired or not desired.
A different but fairly successful strategy I've employed in the past is making the features themselves configurable and simply deploying them disabled, either because they are unfinished or because the client doesn't want to use them yet.
This approach has several advantages: you don't have to juggle which features/fixes have and haven't been merged, and, depending on how you implement the configuration and whether the feature was finished at deployment time, the client can change their mind without having to wait for a new release to take advantage of the additional functionality.
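A minimal sketch of the idea, assuming a hypothetical application that reads its feature flags from a plain-text configuration file at startup:

    # features.conf shipped with the release: unfinished or unwanted
    # features are deployed but switched off
    cat > features.conf <<'EOF'
    invoice_export=enabled
    new_dashboard=disabled
    EOF

    # enabling the feature later is a configuration change, not a new build
    sed -i 's/^new_dashboard=disabled$/new_dashboard=enabled/' features.conf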
That's easy; been there, done that:
Create a 2/3 release branch off your current mainline.
In the 2/3 release branch, delete unwanted features.
Stabilize the 2/3 release branch.
Name the version My Product 2.1 2/3.
Release from the 2/3 release branch.
Return to development on the mainline.
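In git terms, the procedure might look roughly like this (branch, path, and version names are illustrative):

    # 1. create the 2/3 release branch off the mainline
    git checkout -b release-2.1 master
    # 2. delete (or disable) the unwanted features on that branch only
    git rm -r src/feature-not-ready
    git commit -m "Drop the feature that is not shipping in 2.1"
    # 3. stabilize, then tag and release from this branch
    git tag v2.1
    # 4. development continues on master, where the feature still lives
    git checkout master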
Related
GitFlow suggests that when a feature is done it is merged into develop, and at some point develop is merged into master.
What happens when you are working on code that is not approved for the next release but you still want to test it (and other similar future features) together?
You can't merge it to develop because then your feature will be prematurely pushed to master.
What do people do in this case?
Do you create an extra branch for merging these future features into in order to facilitate your testing?
Is there a naming convention for this?
According to Vincent Driessen (the author of the GitFlow model), you have to merge all features into the develop branch. In his own words:
The key moment to branch off a new release branch from develop is when develop (almost) reflects the desired state of the new release. At least all features that are targeted for the release-to-be-built must be merged in to develop at this point in time. All features targeted at future releases may not—they must wait until after the release branch is branched off.
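In plain git commands, the moment he describes might look like this (feature names are illustrative): the approved feature is merged into develop, the release branch is cut, and the unapproved feature simply waits on its own branch.

    # the finished, approved feature goes into develop
    git checkout develop
    git merge --no-ff feature/approved-for-1.2

    # cut the release branch; only what is in develop at this point will ship
    git checkout -b release-1.2 develop

    # feature/not-yet-approved stays unmerged and waits for the next cycle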
I have a few doubts about this matter too (English issues), but what I usually do is what he presents in this image:
Take a look at the last feature. You can see that it is only merged in for the second release of the example. So, when I have an unfinished feature (one that still needs testing, say), I just leave it out until the next release.
Furthermore, GitFlow is only a model (a successful one), and like all models, it may not be completely suitable for your application. You can always try new ideas, as Vincent Driessen (the author) wisely did.
Give it a try, and share any improvements with us.
Git flow is an interesting idea and branching model for managing a big project. But say I want to develop a big feature that contains some sub-features, or two features in the model depend on each other. What is the best way to manage those two features in that case?
Should I just use one feature branch instead of two, since they depend on each other? Or should I keep pulling feature/other-feature into feature/this-feature to make sure the dependency is up to date?
Also, a lot of the time a team member doesn't fully test a feature and marks it as finished too early. That unfinished feature, say feature/unfinished, gets merged into develop and its branch removed. After other features are merged into develop, we find out there are some bugs in feature/unfinished. In this case, should we start a new feature branch to fix the bug, or should I revert the merge of the unfinished feature and restart it? The problem with restarting a feature is that if the later-merged features actually depend on this unfinished feature and I revert its merge, the develop branch will be broken. What is the best practice in this case?
Thanks!
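For reference, backing a feature out of develop after it has been merged is done against the merge commit itself, roughly like this (the hash is a placeholder); note that re-merging the branch later requires undoing the revert first:

    # undo the effect of the bad feature merge on develop
    git checkout develop
    git revert -m 1 <merge-commit-of-feature/unfinished>
    # before re-merging the finished feature later, revert this revert
    # (or re-apply the work on a fresh branch)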
I religiously go through all of my code before I check it in: I do a diff of the before and after code, read through it, and make sure I understand the changes. Usually I end up having to add comments, amend variable names, amend algorithms, amend code, retest things, discuss other developers' code with them, and add new bugs/issues, but I very rarely end up checking in immediately.
I do notice, however, that a lot of developers these days seem to check their code in, treat a broken build as their safety net, and only then go back and look at their changes. This is one of the things I definitely do not like about continuous build systems: sometimes I think they stop developers from thinking about their code enough.
What best practices are there for ensuring only quality code goes into continuous build systems?
I do notice, however, that a lot of developers these days seem to check their code in, treat a broken build as their safety net, and only then go back and look at their changes.
In my opinion, using the continuous build for "verification" purposes is indeed a bad practice; developers should always try not to commit bad code that can affect the team and interrupt its work (the why is so obvious that if you don't get it, just look for another job). So, if your CI engine doesn't offer pre-tested commit (like TeamCity, Team Foundation Server as I just saw, maybe Hudson one day, etc.), you should always sync/build/sync (and rebuild if necessary) locally before committing. Not doing this is laziness and disrespectful of the team.
What best practices are there for ensuring only quality code goes into continuous build systems?
Just in case, a reminder of why breaking the build is bad: bad code potentially affects the whole team and interrupts its work (sigh).
If you can get tool support (see the mentioned solutions above), get it.
If not, document the right workflow: 1. sync, 2. build locally, 3. sync again, 4. back to 2 if required, 5. commit. Make this visible so that there are no excuses (a sketch of the ritual follows this list).
If it is an option, use something like (or similar to) Hudson's Continuous Integration Game plugin. This can make things more fun.
I've seen people use a light financial "penalty" for a broken build, but I don't really like this idea. First, we should be able to behave as responsible adults. Second, the observed result was that people started to delay commits (which in the end was the opposite of the expected benefit).
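The ritual itself fits on a sticky note; a sketch assuming git and a hypothetical `make test` target for the local build:

    git pull --rebase   # 1. sync
    make test           # 2. build and test locally
    git pull --rebase   # 3. sync again in case the team has moved on
    make test           # 4. rebuild if the second sync brought anything in
    git push            # 5. only now share your commits with the team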
Team Foundation Server has something called pre-check-in validation.
Although I have been using Drupal since the D4 series, I only started developing for it professionally with D6, so, despite having done various site upgrades, I have never been faced with the task of porting my own code to a new version.
I know the Drupal community will come up with lots of technical support about changed APIs and architectural changes (see the deadwood module for D5-D6 or even these stubs of D6-D7 how-tos for modules and themes).
However, what I am looking for with my question is more along the lines of strategic thinking; in other words, I am looking for input and advice on how to plan / implement / review the process of porting my own code, in light of what fellow developers have learned from previous experience. Some examples:
Would you advise beginning to port my modules as soon as I have time for it, maintaining a concurrent D7 version for a while (so I am "prepared" for D-day), or would you advise waiting until the port is actually imminent and then upgrading the modules to D7 and dropping the D6 version?
Only some of my modules have full test coverage. Would you advise completing test coverage for the D6 version, so all tests are in place to check the D7 port, or would you advise writing my tests at porting time, targeting the D7 version?
Did you find that being an early adopter gives you an edge in terms of new features and better APIs, or did you rather find it more convenient to delay the conversion so as to leverage the larger amount of readily available contrib modules?
Did you set quality standards / evaluation criteria for yourself, or did you just set the bar at "if it works, I'm happy"? Why? If you set certain standards or goals, what were they / what will they be? How did they help you?
Are there common pitfalls that you experienced in the past and that you think are applicable to the D6-D7 porting process?
Is porting a good moment to do some refactoring, or is it just going to make everything more complex to put back together?
...
These questions are not an exhaustive list, but I hope they give an idea of what kind of information I am looking for. I would rather say: whatever you think is relevant and I did not list above gets a "plus"! :)
If I did not manage to express myself clearly enough, please post a comment with the info you think I should add in the question. Thank you in advance for your time!
PS: Yes I know... D7 is not yet out and it will take months before important contrib modules will be upgraded... but it's never too early to start thinking! :)
Good questions, so let's see:
(when to start porting)
This certainly depends on the complexity of the modules to port. If there are really complex/large ones, it might be useful to start early in order to find tricky spots while not being under pressure. For smaller/standard ones, I'd try to find a bigger time slot later on where I can port many of them in a row in order to get the routine stuff memorized quickly (and benefit from the probably improved documentation).
(test coverage)
Normally I'd say that having a good test coverage before starting refactoring/porting would certainly be advisable. But given that Drupal-7 introduces a major change concerning the testing framework by moving it to core, I'd expect the need to rewrite a significant amount of tests anyway. So if there is no need to maintain the Drupal-6 versions after the migration, I'd save the time/trouble and aim for increased coverage after the porting.
(early adopter vs. wait and see)
Using Drupal since the 4.7 version, we have always waited for at least the first official release of a new major version before even thinking about porting. With Drupal 6, we waited for the views module before porting our first site, and we still have some smaller projects on Drupal-5, as they are working just fine and it would be hard to justify the extra bill to our clients as long as there are still maintenance/security fixes for it. There is only so much time in a day, and there is always a backlog of bugs to fix, features to add, etc., so there is no use playing with unfinished technology while there are more pressing things to do that would immediately benefit our clients. Now, this would certainly be different if we had to maintain one or more 'official' contributed modules, as offering an early port would be a good thing.
I'm a bit in a bind here: being an early adopter certainly benefits the community, as someone has to find the bugs before they can get fixed, but on the other hand, it makes little business sense to fight hour after hour with bugs others might have found/fixed if you'd just waited a bit longer. As long as I have to do this for a living, I need to watch my resources, trying to strike an acceptable balance between serving the community and benefiting from it :-/
(quality standards)
"If it works, I'm happy" just doesn't cut it, as I don't want to be happy momentarily only, but tomorrow as well. So one of my quality standards is that I need to be (somewhat) certain that I 'grokked' new concepts well enough in order to not just makes things work, but make them work like they should. Now this is hard to define more precisely, as it is obviously impossible to know if one 'got it' before 'getting it', so it boils down to a gut feeling/distinction of 'yeah, it kinda works' vs. 'yup, that looks right', and one has to accept that he will quite regularly be wrong about this.
That said, one particular point I'm looking out for is 'intervene as early as possible'. As a beginner, I often tweaked stuff 'after the fact' during the theming stage, while it would have been much easier to apply the 'fix' earlier in the processing chain by means of one hook or the other. So right now, whenever I'm about to 'adjust' something in the theme layer, I deliberately take a small time out to check if this can not be done more cleanly/compatible within a hook earlier on. As I expect Drupal-7 to add even more hooking options, this is something I will pay extra attention to, as it usually reduces conflicts and sudden 'breaking of stuff' when adding new modules.
(common pitfalls)
Well, mainly porting too early, and finding out afterwards/in between that one or more needed modules were not available for the new version at all, or only in a dev/alpha/early-beta state. So I'd make sure to compile a complete list of used/needed modules first, listing their porting state, along with a quick inspection of their issue queues.
Besides that, I have so far always been very pleased with the new versions and their improvements, and I'm looking forward to Drupal-7 again.
(refactoring while porting)
One could say that porting is a rather large refactoring in itself, so there is no need to add to the complexity by restructuring non porting related stuff. On the other hand, if you already have to shred your modules to pieces anyway, why not use the opportunity to make it a major overhaul? I'd try to draw a line based on complexity - for big/complex modules, I'd do the port as straight as possible, and refactor more later on, if need be. For smaller modules, it shouldn't really matter, as the likelihood of introducing subtle bugs should be rather small.
(other stuff)
... need to think about it ...
Ok, other stuff:
Resource needs - given some of the Drupal-7 threads, it looks like they are likely to go up, so this should be evaluated before porting smaller sites that sit on a shared/restricted hosting account.
Latest versions first - This one is rather obvious and always stressed in the upgrade guides, but nevertheless worth mentioning: upgrade core and all modules to their latest current version before doing a major upgrade, as the upgrade code is highly likely to depend on the latest table/data structures in order to work correctly. Given Drupal's 'piecemeal', one-step-at-a-time update strategy, it would be very hard to implement upgrade code that detects different pre-upgrade states and acts accordingly.
Let's assume one joins a project near the end of its development cycle. The project has been passed across many teams and has been an overall free-for-all, with no testing whatsoever taking place the whole time. The other members of this team have no knowledge of testing (shame!), and unit testing each method seems infeasible at this point.
What would the recommended strategy for testing a product be at this point, besides usability testing? Is this normally the point where you're stuck with manual point-and-click expected output/actual output work?
I typically take a bottom-up approach to testing, but I think in this case you want to go top-down. Test the biggest components you can wrap unit-tests around and see how they fail. Those failures should point you towards what sub-components need tests of their own. You'll have a pretty spotty test suite when this is done, but it's a start.
If you have the budget for it, get a test automation suite. HP/Mercury QuickTest is the leader in this space, but it is very expensive. The idea is that you record test cases like macros by driving your GUI through use cases. You fill out inputs on a form (web, .NET, Swing, pretty much any sort of GUI) and the engine learns the form elements' names. Then you can check for expected output in the GUI and in the DB. After that you can plug in a table or spreadsheet of various test inputs, including invalid cases where it should fail, and run it through hundreds of scenarios if you like. Once the tests are recorded, you can also edit the generated scripts to customize them. It builds a neat report for you at the end showing exactly what failed.
There are also some cheap and free GUI automation testing suites that do pretty much the same thing, but with fewer features. In general, the more expensive the suite, the less manual customization is necessary. Check out this list: http://www.testingfaqs.org/t-gui.html
I think this is where good quality assurance testing comes in. Write out old-fashioned test cases and hand them out to multiple people on the team to test.
What would the recommended strategy for testing a product be at this point, besides usability testing?
I'd recommend code inspection, by someone/people who know (or who can develop) the product's functional specification.
An extreme, purist position would be to say that, because it "has been an overall free-for-all with no testing whatsoever", one can't trust any of it: not the existing testing, nor the code, nor the developers, nor the development process, nor management, nothing about the project. Furthermore, testing doesn't add quality to software (quality has to be built in as part of the development process). The only way to have a quality product is to build a quality product; this product had no quality built in, and therefore one needs to rebuild it:
Treat the existing source code as a throw-away prototype or documentation
Build a new product piece-by-piece, optionally incorporating suitable fragments (if any) of the old source code.
But doing code inspection (and correcting defects found via code inspection) might be quicker. That would be in addition to functional testing.
Whether you'll want to not only test it but also spend the extra time and effort to develop automated tests depends on whether you'll want to maintain the software (i.e., change it in any way in the future and then retest it).
You'll also need:
Either:
Knowledge of the functional specification (and non-functional specification)
Developers and/or QA people with a clue
Or:
A small, simple product
Patient, forgiving end-users
Continuing technical support after the product is delivered
One technique that I incorporate into my development practice when entering a project at this point in the lifecycle is to add unit tests as defects are reported (by QA or end users). You won't get full code coverage of the existing code base, but at least this way future development can be driven and documented by tests. Also, this way you can be sure your test fails before you work on the fix: if you write the test and it doesn't fail, the test is faulty.
Additionally, as you add new functionality to the system, start with tests so that at least those sub-systems are tested. As the new systems interact with the existing ones, try adding tests around the old boundary layers and work your way in over time. While these won't be unit tests, such integration tests are better than nothing.
Refactoring is yet another prime target for testing. Refactoring without tests is like walking a tightrope without a net. You may get to the other side successfully, but is the risk worth the reward?