GitFlow suggests that when a feature is done it is merged into develop, and at some point develop is merged into master.
What happens when you are working on code that is not approved for the next release but you still want to test it (and other similar future features) together?
You can't merge it to develop because then your feature will be prematurely pushed to master.
What do people do in this case?
Do you create an extra branch to merge these future features into, in order to facilitate your testing?
Is there a naming convention for this?
According to Vincent Driessen (the author of the GitFlow model), you have to merge all features into the develop branch. In his own words:
The key moment to branch off a new release branch from develop is when develop (almost) reflects the desired state of the new release. At least all features that are targeted for the release-to-be-built must be merged in to develop at this point in time. All features targeted at future releases may not—they must wait until after the release branch is branched off.
I have a few doubts about this matter too (partly a language issue), but what I usually do follows what he presents in this image:
Take a look at the last feature. You can see that it is only merged in the second release of the example. So, when I have an unfinished feature (one that still needs testing, perhaps), I just leave it out until the next release.
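With the git-flow helper commands, for example (an assumption that you use that tooling; the equivalent plain git branch/merge commands work the same way, and the feature name and version numbers here are purely illustrative), holding a feature back looks like this:

    git flow feature start big-feature     # branches big-feature off develop
    # ... work on it, but do NOT finish it yet ...
    git flow release start 1.2.0           # the release branches off develop without big-feature
    git flow release finish 1.2.0          # merges the release into master and back into develop
    git flow feature finish big-feature    # only now is big-feature merged into develop, for the next release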
Furthermore, GitFlow is only a model (a successful one), and like all models it may not be completely suitable for your situation. You can always try new ideas, as Vincent Driessen (the author) wisely did.
Give it a try, and share any improvements with us.
Related
Git flow is an interesting idea and branching model for managing a big project. But suppose I want to develop a big feature that contains some sub-features, or two features that have to depend on each other. What is the best way to manage these two features in that case?
Should I just use one feature branch instead of two, since they depend on each other? Or should I keep pulling feature/other-feature into feature/this-feature to ensure the dependency is up to date?
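By that second option I mean something like the following (the branch names are illustrative):

    git checkout feature/this-feature
    git merge feature/other-feature        # or: git pull origin feature/other-feature
    # repeat whenever feature/other-feature moves, so the dependency stays current;
    # when both are done, finish feature/other-feature first, then feature/this-feature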
And a lot of the time, a team member doesn't fully test a feature and marks it as finished too early. This unfinished feature, say feature/unfinished, then gets merged into the develop branch and its branch removed. After other features have been merged into develop, we find out there are some bugs in feature/unfinished. In this case, should we start a new feature branch to fix the bug, or should I revert the merge of the unfinished feature and restart it? The problem with restarting a feature is that if the later merged features actually depend on this unfinished feature and I revert its merge, then the develop branch will be broken. What is the best practice in this case?
Thanks!
I am supposed to automate testing of an application developed in PowerBuilder. To test this application we are using Rational Robot as a functional testing tool. We expect at least 40-50% of the application to change with each release, and releases are scheduled at least 3 times a year.
The product has a different setup for each client, and test scenarios have been derived accordingly. When changes do occur, they affect both functional features and the interface, and the automation needs to account for that. I have identified a few areas that are stable (i.e., where no major changes occur) as candidates for automation. Is it feasible to proceed with automation on that basis?
Could you please suggest how to go about this?
I've seen the answer to your last question involve a consultant spending two days interviewing the development team, then three days developing a report. And, in some cases, I'd say the report was preliminary, introductory, and rushed. However, let me throw out a few ideas that may help manage expectations on your team.
Testing automation is great for checking for regressions in functionality that allegedly isn't being touched. Things like framework changes or database changes can cause untouched code to crash. For risk-averse environments (e.g. banking, pharmaceutical prescriptions), the investment in automation is well worth the effort.
However, what I've seen often is an underestimation of the effort. To really test all the functionality in a unit (let's say a window, for example), you need to review the specifications, design tests that test each functional point, plan your data (what is the data going into the window, how will you make sure this data is as expected when you start the test each time, what is the data when you've finished your tests, how will you ensure the data, including non-visible data, is correct), then script and debug all your tests. I'm not sure what professional testers say (I'm a developer by trade, but have taken a course in an automated testing tool), but if you aren't planning for the same amount of effort to be expended on developing the automation tests as you're spending on application development for the same functionality, I think you'll quickly become frustrated. Add to that the changing functionality means changing testing scripts, and automated testing can become a significant cost. (So, tell your manager that automated testing doesn't mean you push a button and things get tested. < grin > )
That's not to say that you can't expend less effort on testing and get some results, but you get what you pay for. Having a script that opens and closes all the windows in the app provides some value, but it won't tell you that a new behaviour implemented in the framework is being overridden on window X, or that a database change has screwed up the sequencing of items in a drop down DataWindow, or a report completion time goes from five seconds to five hours. However, again, don't underestimate the effort. This is a new tool with a new language and idiosyncrasies that need to be figured out and mastered.
Automated testing can be a great investment. If the cost of failure is significant, like a badly prescribed drug causing a death, then the investment is worth it. However, for cases where there is a high turnover of functionality (like I think you're describing) and the consequences of a failure are less critical, you might want to consider comparing the cost/benefit of testing automation with additional manual testing resources.
Good luck,
Terry
Adding on to what Terry said: It sounds like you are new both to automation in general and to Rational Robot in particular.
One thing to keep in mind is that test automation is software development and needs to be treated as such. That means you need personnel dedicated to the automation effort who are solid programmers and have expertise in the tool being used (Robot in this case).
If your team does not have the general programming/automation skills and the specific Robot skills, you are going to need to hire that personnel or get existing staff trained in those skill sets.
I religiously go through all of my code before I check it in: I do a diff of the before and after code, read through it, and make sure I understand the changes. Usually I end up having to add comments, amend variable names, amend algorithms, amend code, retest things, discuss their code with other developers, and log new bugs/issues, but I very rarely end up doing the check-in immediately.
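With git, for instance, that pre-check-in read-through looks roughly like this (the same idea applies to any VCS):

    git diff                   # read every unstaged change
    git add -p                 # stage hunk by hunk, thinking about each one
    git diff --cached          # final read-through of exactly what will be committed
    git commit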
I do notice, however, that a lot of developers these days seem to check their code in and treat the build as the check: only when the build breaks do they go back and look at their changes. This is one of the things about continuous build systems that I definitely do not like, in that I think developers sometimes stop thinking about their code enough.
What best practices are there for ensuring only quality code goes into continuous build systems?
I do notice, however, that a lot of developers these days seem to check their code in and treat the build as the check: only when the build breaks do they go back and look at their changes.
In my opinion, using the continuous build for "verification" purposes is indeed a bad practice; developers should always try not to commit bad code that can affect the team and interrupt the work (the why is so obvious that if you don't get it, just look for another job). So, if your CI engine doesn't offer pre-tested commits (like TeamCity, Team Foundation Server as I just saw, maybe Hudson one day, etc.), you should always sync/build/sync (and rebuild if necessary) locally before committing. Not doing this is lazy and not respectful of the team.
What best practices are there for ensuring only quality code goes into continuous build systems?
Just in case, a reminder of why breaking the build is bad: bad code potentially affects the whole team and interrupts the work (sigh).
If you can get tool support (see the mentioned solutions above), get it.
If not, document the right workflow: 1. sync, 2. build locally, 3. sync again, 4. go back to step 2 if required, 5. commit. Make this visible so that there are no excuses. (A shell sketch of this workflow follows at the end of this list.)
If this is an option, use something like (or similar to) Hudson's Continuous Integration Game plugin. This can make things more fun.
I've seen people use a small financial "penalty" for broken builds, but I don't really like this idea. First, we should be able to behave as responsible adults. Second, the result was that people started to delay commits (which in the end was the opposite of the expected benefit).
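For reference, here is a minimal shell sketch of the sync/build/commit workflow mentioned above, assuming a git repository and a make test entry point for the local build (both are assumptions; substitute your own VCS and build commands):

    git pull --rebase origin develop   # 1. sync
    make test                          # 2. build and test locally
    git pull --rebase origin develop   # 3. sync again, in case new commits arrived
    make test                          # 4. rebuild/retest if anything came in
    git push origin develop            # 5. commit/push only after a green local build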
Team Foundation Server has something called pre-check-in validation.
Let's assume one joins a project near the end of its development cycle. The project has been passed around across many teams and has been an overall free-for-all, with no testing whatsoever taking place the whole time. The other members on this team have no knowledge of testing (shame!) and unit testing each method seems infeasible at this point.
What would the recommended strategy for testing a product be at this point, besides usability testing? Is this normally the point where you're stuck with manual point-and-click expected output/actual output work?
I typically take a bottom-up approach to testing, but I think in this case you want to go top-down. Test the biggest components you can wrap unit-tests around and see how they fail. Those failures should point you towards what sub-components need tests of their own. You'll have a pretty spotty test suite when this is done, but it's a start.
If you have the budget for it, get a testing automation suite. HP/Mercury QuickTest is the leader in this space, but it is very expensive. The idea is that you record test cases like macros by driving your GUI through use cases. You fill out inputs on a form (web, .NET, Swing, pretty much any sort of GUI), and the engine learns the names of the form elements. Then you can check for expected output in the GUI and in the db. Then you can plug in a table or spreadsheet of various test inputs, including invalid cases where it should fail, and run it through hundreds of scenarios if you like. After the tests are recorded, you can also edit the generated scripts to customize them. It builds a neat report for you in the end, showing you exactly what failed.
There are also some cheap and free GUI automation testing suites that do pretty much the same thing but with fewer features. In general, the more expensive the suite, the less manual customization is necessary. Check out this list: http://www.testingfaqs.org/t-gui.html
I think this is where a good Quality Assurance test would come in. Write out old fashioned test cases and hand out to multiple people on the team to test.
What would the recommended strategy for testing a product be at this point, besides usability testing?
I'd recommend code inspection, by someone/people who know (or who can develop) the product's functional specification.
An extreme, purist way would be to say that because it "has been an overall free-for-all with no testing whatsoever", one can't trust any of it: not the existing testing, nor the code, nor the developers, nor the development process, nor management, nothing about the project. Furthermore, testing doesn't add quality to software (quality has to be built in, as part of the development process). The only way to have a quality product is to build a quality product; this product had no quality built in, and therefore one needs to rebuild it:
Treat the existing source code as a throw-away prototype or documentation
Build a new product piece-by-piece, optionally incorporating suitable fragments (if any) of the old source code.
But doing code inspection (and correcting defects found via code inspection) might be quicker. That would be in addition to functional testing.
Whether you'll want to not only test it but also spend the extra time and effort to develop automated tests depends on whether you'll want to maintain the software (i.e., change it in any way in the future and then retest it).
You'll also need:
Either:
Knowledge of the functional specification (and non-functional specification)
Developers and/or QA people with a clue
Or:
A small, simple product
Patient, forgiving end-users
Continuing technical support after the product is delivered
One technique that I incorporate into my development practice when entering a project at this time in the lifecycle is to add unit tests as defects are reported (by QA or end users). You won't get full code coverage of the existing code base, but at least this way future development can be driven and documented by tests. Also this way you should be assured that your tests fail before working on the implementation. If you write the test and it doesn't fail, the test is faulty.
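A minimal sketch of that discipline, assuming a git repository and a test suite invoked with make test (the branch, issue, and file names are purely illustrative):

    git checkout -b bugfix/issue-123 develop
    # 1. write a test that reproduces the reported defect, e.g. tests/test_issue_123
    make test                          # it must FAIL here; if it passes, the test is faulty
    # 2. fix the implementation
    make test                          # now it should pass
    git commit -am "Fix issue 123, with a regression test"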
Additionally, as you add new functionality to the system, start those additions with tests so that at least those sub-systems are tested. As the new systems interact with existing ones, try adding tests around the old boundary layers and work your way in over time. While these won't be unit tests, these integration tests are better than nothing.
Refactoring is yet another prime target for testing. Refactoring without tests is like walking a tightrope without a net. You may get to the other side successfully, but is the risk worth the reward?
Further to my question at accidentally-released-code-to-live-how-to-prevent-happening-again: after client UAT we often have the client saying they are happy for a subset of features to be released, while they want the others held back for a future release instead.
My question is "How do you release 2/3 (two out of 3) of your features".
I'd be interested in how the big players such as Microsoft handle situations like..
"Right we're only going to release 45/50 of the initially proposed features/fixes in the next version of Word, package it and ship it out".
Assuming those 5 features not being released in the next version have already been started, how can you leave them out of the release build and deployment?
How do you release 2/3 of your developed features?
How to release a subset of deliverables?
-- Lee
If you haven't thought about this in advance, it's pretty hard to do.
But in the future, here's how you could set yourself up to do this:
Get a real version control system, with very good support for both branching and merging. Historically, this has meant something like git or Mercurial, because Subversion's merge support has been very weak. (The Subversion team has recently been working on improving their merge support, however.) On the Windows side, I don't know what VC tools are best for something like this.
Decide how to organize work on individual features. One approach is to keep each feature on its own branch, and only merge it back to the main branch when the new feature is ready. The goal here is to keep the main branch almost shippable at all times. This is easiest when the feature branches don't sit around collecting dust—perhaps each programmer could work on only 1 or 2 features at a time, and merge them as soon as they're ready?
Alternatively, you can try to cherry-pick individual patches out of your version control history. This is tedious and error-prone, but it may be possible for certain very disciplined development groups who write very clean patches that make exactly 1 complete change. You see this type of patch in the Linux kernel community. Try looking at some patches on the Linux 2.6 gitweb to see what this style of development looks like.
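In git, the mechanics are roughly this (the branch name, tag, and commit ids are illustrative):

    git checkout -b release/2.0 v1.9       # start from the last shipped tag
    git log --oneline v1.9..main           # list the candidate commits since then
    git cherry-pick abc1234                # pick only complete, single-purpose changes
    git cherry-pick def5678
    # a commit that mixes several changes will not transplant cleanly, which is
    # why this only works when patches are disciplined and self-contained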
If you have trouble keeping your trunk "almost shippable" at all times, you might want to read a book on agile programming, such as Extreme Programming Explained. All the branching and merging in the world will be useless if your new code tends to be very buggy and require long periods of testing to find basic logic errors.
Updates
How do feature branches work with continuous integration? In general, I tend to build feature branches after each check-in, just like the main branch, and I expect developers to commit more-or-less daily. But more importantly, I try to merge feature branches back to the main branch very aggressively—a 2-week-old feature branch would make me very, very nervous, because it means somebody is living off in their own little world.
What if the client only wants some of the already working features? This would worry me a bit, and I would want to ask them why the client only wants some of the features. Are they nervous about the quality of the code? Are we building the right features? If we're working on features that the client really wants, and if our main branch is always solid, then the client should be eager to get everything we've implemented. So in this case, I would first look hard for underlying problems with our process and try to fix them.
However, if there were some special once-in-a-blue-moon reason for this request, I would basically create a new trunk, re-merge some branches, and cherry-pick other patches. Or disable some of the UI, as the other posters have suggested. But I wouldn't make a habit of it.
This reminds me a lot of an interview question I was asked at Borland when I was applying for a program manager position. There the question was phrased differently — there's a major bug in one feature that can't be fixed before a fixed release date — but I think the same approach can work: remove the UI elements for the features for a future release.
Of course this assumes that the features you want to leave out don't affect the rest of what you want to ship... but if that's the case, just changing the UI is easier than trying to make a more drastic change.
In practice what I think you would do would be to branch the code for release and then make the UI removals on that branch.
It's usually a function of version control, but doing something like that can be quite complicated depending on the size of the project and how many changesets/revisions you have to classify as desired or not desired.
A different but fairly successful strategy that I've employed in the past is making the features themselves configurable, and simply deploying unfinished features, or features certain clients don't want to use yet, in a disabled state.
This approach has several advantages: you don't have to juggle which features/fixes have and have not been merged, and, depending on how you implement the configuration and whether the feature was finished at the time of deployment, the client can change their mind and not have to wait for a new release to take advantage of the additional functionality.
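In its simplest form the toggle can be nothing more than a configuration entry that the application checks at startup; for example (the file name and key are illustrative, and this assumes your application already reads such a properties file):

    # ship the feature dark by default; flipping the flag later enables it
    # without a new release
    cat >> config/features.properties <<'EOF'
    new-billing-flow=false
    EOF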
That's easy, been there done that:
Create a 2/3 release branch off your current mainline.
In the 2/3 release branch, delete unwanted features.
Stabilize the 2/3 release branch.
Name the version My Product 2.1 2/3.
Release from the 2/3 release branch.
Return to the development in the mainline.
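If you are on git, those steps might look roughly like this (the branch and tag names are illustrative, and "delete unwanted features" is shown here as reverting their merge commits):

    git checkout -b release/2.1-partial develop           # 1. branch off the mainline
    git revert -m 1 <merge-commit-of-unwanted-feature>    # 2. back out each unwanted feature
    # 3. stabilize: bug fixes go to this branch (and get merged back to develop)
    git tag v2.1-partial                                   # 4. name the version
    # 5. build and ship from this branch
    git checkout develop                                   # 6. return to development on the mainline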