We are a small team of 3 developers (2 experienced but new to this particular business sector) developing a functionally complex product. We're using Scrum and have a demo at the end of each sprint. It's clear that the functional team has plenty of ideas, but these are not well communicated to the development team, and the demo raises more questions than it answers.
Do you have any recommendations for improving the quality of input from the functional people?
Further info: I think part of the problem is that there are no specs or User Stories as such. Personally I think they need to be writing down some sort of requirements - what sort of things should they be writing down, and in how much detail, given it's an agile process?
Have you tried working with your customer to define / formulate acceptance tests?
Using something like Fit to come up with these tests would result in better specs, and it forces the customer to think about what is really required. The icing on the cake is that you end up with executable specs that double as documentation at the end of this process.
That is, of course, if your customers are available and open to this approach. Give it a try!
If not (and that seems to be the majority - because it is less work) - calendar flash 'em - schedule meetings/telecons every week until they sing like canaries :) +1 to Dana
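To make the Fit suggestion above concrete, here is a minimal sketch of what a column fixture might look like (this assumes the classic Java fit.ColumnFixture API; the discount rule and the names DiscountCalculation, orderTotal and discount are invented purely for illustration):

    import fit.ColumnFixture;

    // Each input column in the customer's table maps to a public field,
    // and each expected-value column (headed "discount()") maps to a method.
    public class DiscountCalculation extends ColumnFixture {
        public double orderTotal;   // input: what the customer typed in the table

        public double discount() {  // expected output: Fit compares this to the table cell
            return orderTotal >= 100.0 ? orderTotal * 0.05 : 0.0;
        }
    }

The customer-facing spec is then just a table they can read and edit:

    DiscountCalculation
    orderTotal | discount()
    99         | 0
    150        | 7.5

When Fit runs the table against the fixture, every passing row is an agreed, executable requirement.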
Sometimes the easiest way to get input from people is to force it out of them. My company used SCRUM on a project, and found very quickly that people tend to keep to themselves when they already know what they're doing. We ended up organizing weekly meetings where team members were required to display something that was learned during the week. It was forced, but it worked pretty well.
I'm a big believer in Use Cases, detailing the system behaviour in response to user actions. Collectively these can form a loose set of requirements, and in a SCRUM environment can help you prioritise the Use Cases which will form that particular sprint's implemented features.
For example, after talking to your functional team you identify 15 separate Use Cases. You prioritise the Use Cases and decide to plan for 5 sprints. At the end of each sprint you demo the product fulfilling the Use Cases implemented during that sprint, noting the feedback and amending the Use Cases.
I understand that the people you call functional people are acting as Product Owners, right?
I think part of the problem is that there are no specs or User Stories as such. Personally I think they need to be writing down some sort of requirements - what sort of things should they be writing down, and in how much detail, given it's an agile process?
Actually, without any specs you probably have no acceptance tests for the backlog items either. You should ask the PO to write the user stories; I like the "As a -type of user-, I want -some goal- so that -some reason-" form. Keep in mind that User Stories should be INVEST - Independent, Negotiable, Valuable to users or customers, Estimable, Small and Testable. What is a must is to have the acceptance tests written together with the story, so that the team knows what the story must be able to do in order to be considered done.
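As a purely illustrative sketch (the shop domain, class and method names below are invented, and JUnit 4 is assumed), a story and one of its acceptance tests might look like this:

Story: "As a registered customer, I want to see my order history sorted newest first, so that I can quickly re-order recent items."

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;
    import java.util.*;

    public class OrderHistoryAcceptanceTest {

        // Tiny in-memory stand-in so the sketch is self-contained;
        // a real acceptance test would drive the actual product code.
        static class OrderHistory {
            private final List<String[]> orders = new ArrayList<>(); // {orderId, isoDate}

            void record(String orderId, String isoDate) {
                orders.add(new String[] { orderId, isoDate });
            }

            List<String> newestFirst() {
                List<String[]> copy = new ArrayList<>(orders);
                copy.sort((a, b) -> b[1].compareTo(a[1])); // ISO dates sort lexicographically
                List<String> ids = new ArrayList<>();
                for (String[] o : copy) ids.add(o[0]);
                return ids;
            }
        }

        @Test
        public void ordersAreListedNewestFirst() {
            OrderHistory history = new OrderHistory();
            history.record("A-1", "2009-01-10");
            history.record("A-2", "2009-03-02");

            assertEquals("A-2", history.newestFirst().get(0));
        }
    }

The point is not the code itself, but that the story only counts as done when tests like this pass - that becomes the shared definition of done between the PO and the team.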
Remember that as the product evolves, it's expected that the PO will have new ideas as he sees the working product. That's not a bad thing; actually it is one of the best things you get from Agile. What you have to pay attention to is that these ideas must be included in the product backlog and prioritized by the PO. And, if an idea is necessary and will add value to the customer, it should be planned to be built in the next sprint.
Someone from the functional team should be part of the team and available to answer your questions about the features you're adding.
How can you estimate the Backlog items if they are not detailed enough?
You could establish a rule that Backlog items that do not have clear acceptance criteria cannot be planned.
It would be better to have someone from the functional team acting as Product Owner, to determine, choose and prioritize the Backlog items, and/or as Domain Expert.
Also, make sure everyone in both the functional team and the development team speaks the same language, so as to avoid misunderstandings; see ubiquitous language.
Track the time lost waiting for answers from the functional team, as well as the time wasted developing unnecessary features or reworking existing features so that they fit the bill.
Are they participating in the stand-up meetings?
You could propose that they send a representative to each (or some) of them, so you can ask them for input before the end of the sprint.
Are you doing stand-up meetings and do you have a burn-down chart? I think those two areas would benefit you greatly.
I recommend the book "Practices of an Agile Developer" - it is full of suggestions on how to make a Scrum team successful. It also gives good tips on how to get the product owner/customer more involved and how to get the whole process rolling. It's worth the money IMHO.
I agree that you need some sort of requirements (user stories or else).
One piece of advice I can give is to use some sort of visual aid with the functional teams. When customers have plenty of ideas (as you've said), they usually also have a visual idea of what a feature looks like; when the developed product doesn't fit this visual idea, it creates a lot of doubt, even if it does the job functionally.
When discussing functionality with customers, I try to be very visual. Drawing sketches on a board, or even verbally describing what something would look like. Trying to find a common visual image. You can then take a photo of the sketches and use them as part of the documentation.
Another piece of advice is to keep your sprints as short as possible, so that you do more frequent demos. But you may already be doing this, since you didn't mention your current sprint duration.
I'm looking for a team building / training activity for some of my scrum teams. I want something that really illustrates the flexibility that the team has when implementing stories to define the scope and complexity of the feature themselves. Most of the teams have long-term waterfall experience and are used to having a well-defined specification. I'm looking for something that illustrates the need for the team to vary the scope of what they are building themselves, dependent on the time and resources available.
I couldn't find anything at tastycupcakes.com and Google wasn't much help. Maybe someone has prepared something themselves they would care to share?
Edit (in response to request for example in comments)
Suppose the team has committed to building a story for displaying data to a user in a paged list for analysis purposes. The acceptance criteria can be fulfilled easily, but a different implementation might provide added functionality, e.g. wrapping a third-party control which has built-in sorting and grouping functionality.
The point is, because the Scrum time window is absolutely fixed, the scope of the implementation may be extended if the team feels they are ahead of schedule, especially if some technical designs proved less problematic than thought. Conversely, if some tasks have taken longer than anticipated, the team can short-cut the user story while still making sure what they deliver satisfies the acceptance criteria.
The thing I am trying to get away from is the current mindset that the feature has a specification set in stone, and that's what will be built, whatever the circumstances.
I don't think it is up to the team to define the scope and complexity of a story. It is the PO's job to define the conditions of acceptance and then it is the team's job to estimate size based on the PO's description. If the stories are right sized, the conditions are usually pretty tightly defined. This could be why you aren't seeing much out there ....
EDIT:
I don't think your example changes my answer. If the PO wanted this "additional functionality" such as sorting etc, they would have defined it in the story or in another story. To build something that isn't asked for is waste. Spending time on a story that is low priority in the backlog is inefficient. Agile is based on building what is needed and only what is needed in order of importance. So I would frown on developers adding "extra goodies" just because they are working on a particular screen.
That does not mean you shouldn't look over all the stories in the backlog and make architectural plans based on what will be needed in the future.
I think I get what you're looking for, but feel free to clarify if I'm mistaken. I'm under the impression you're looking for an exercise that will show the flexibility in implementation details the team has when using user stories.
If so, try an exercise like this.
Split the team into two groups and have the same Product Owner between them (or you can have one Product Owner for each group if both PO's know the exercise).
The PO presents a fictional story like, "As an executive at BigSales Co, I want to be able to see, at a glance, which salespeople are performing and which are not, so that I can pair performers with under-performers to improve the overall team performance."
A story like the one above is light on implementation details, but has a very clear business problem to be solved (as user stories should). Using a story like this, give the teams 30 minutes to work on a paper prototype that would satisfy the user story. They can interact as much as they want with the PO during this time frame. The person playing the PO should be careful not to give them implementation details, but leave it to the team to decide, while expressing and clarifying the business need.
At the end of the 30 minutes, have each team present their solution and explain how it satisfies the user story.
The important thing here, is that once both teams have presented, it is likely that both presentations will be quite different and yet both valid. This shows the level of flexibility the team has to provide what they feel is the best solution without having to be told explicitly what to do.
Hope this helps.
In order to estimate the story cost the team will be expecting to work with the PO to define, in at least broad terms, the requirements for that feature. In the example you gave the team may explicitly ask the PO if the sorting & grouping functionality is needed. If they say no, as the PO can't see a use for it at that stage, then the estimate is given on that basis and the implementation done according to that. No consideration is given to these additional features on the YAGNI principle. If the requirement for the sorting & grouping comes up subsequently as a result of people using early incarnations of the product, well, that's another story, and is estimated & scheduled into the backlog accordingly. The scope of the implementation of a story isn't changed just because you've got some time left in an iteration - instead you simply pull the next prioritised item from the backlog and get on with that.
Of course, when implementing the story the team are at liberty to use the most time/cost-effective method that they consider suitable for the evolving product. If this means using a component with additional capability, i.e. a superset of the required features, then they could do so (unless this is in breach of non-functional requirements), as long as the acceptance criteria are passed, but they shouldn't go deliberately adding in unrequested functionality just because they've got some time spare in an iteration.
My opinion is somewhere between your description of adapting the features to the time that is left and the "just fulfill the acceptance criteria and that's it" point of view of the two other commentators...
In my view, you should all recall the formal setup of a user story:
As a -role- I want -feature-, so -aim-.
Given the purpose of a desired feature, the developer can better understand what the PO really wants. He can then come up with additional ideas and ask the PO, e.g.:
Hey PO, if you want -aim-, why don't we do -alternative/addition to feature-? Wouldn't that be even better?
And the PO may agree, and the story is implemented as described but with another interpretation, or the story may be adapted. The points that are important to me:
The PO describes the purpose he would like to have fulfilled, and a feature that is appropriate to achieve it.
The team does not just implement the acceptance criteria like development zombies; they are open-minded and tuned to the PO's vision in general and the single story's purpose in particular - so they may come up with additional/alternative ideas.
The team also does not enhance user stories or over-engineer on their own authority. That's wasteful!
I hope you share my opinion ;-)
A good training exercise and a fun team-building activity is to play the XP Planning Game.
The premise is that the product owner gives requirements for something visual (like a coffee machine, a robot) and all requirements must be drawable. The developers have to draw the requirements.
There are several short iterations (the whole exercise takes between an hour and 90 minutes depending on setup time) and it's interesting to see how communication improves and trade-offs happen as the game progresses. I've run this myself during project kickoffs and when converting teams to agile practices, and the team has always found it useful and fun.
If everything in Scrum is all about functional things that a user can see, is there really any place for refactoring code unrelated to any new functional requirements?
I don't think that this has as much to do with Scrum as it does with project management philosophy.
Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements. It's not "work that yields results" like normal development, it's "work that prevents a delay of results later". Given the typically short time-lines used for Sprints, the benefit is often hard to see and nearly impossible to quantify.
Keeping code maintainable needs to be an item on your burn-down list (if you use Scrum). It is just as important as new development. While it may not seem like something that is "visible to the user", ignoring it increases your technical debt. Down the road, when the technical debt piles up enough that your code's lack of maintainability slows down development, the delays in new feature development will be visible to customers.
It's all a matter of management/philosophy. Instead of looking at refactoring and maintainability enhancements as "extra" work that doesn't impact customers, it should be viewed as a time investment to prevent customer-visible delays (and potentially bugs as well) down the road. Developers can sometimes see these benefits more clearly than managers can; if your manager doesn't understand the disadvantages of neglecting maintainability, you might want to grab several other developers and have a chat with your manager.
I think there is a fair case to be made for technical-debt refactoring where the effort/cost impact of maintaining the code is as high as, or even higher than, the cost of refactoring it to improve quality or work better/properly - specifically to lend it a higher degree of maintainability.
E.g.: if the software is so problematic that you are losing customers, or money, you'd act fast to fix it. Some might argue this is a business requirement of its own, but it's often not placed front and centre on small to mid-sized development projects, which instead focus on the technicalities of creating apps rather than the impact of the quality of the app on the bottom line.
I think you are probably talking about large scale refactoring rather than the continuous refactoring you would do whilst in the whole red-green-refactor cycle.
My approach would be something like this: if refactoring an old feature makes it easier to add a new feature, then go ahead and do it. But in some ways you are right; if there is no pressure on a particular unit to change (i.e. it is completely finished, will never change again and will never impact other modules), then there is no practical need to refactor. However, I rarely find a module that is quite so finalised.
If everything in Scrum is all about functional things that a user can see (...)
Any project and methodology should be about generating business value; you rarely do things just for the fun of it in a business environment. Having said that, I see quality in Scrum (and other Agile methods) as a way to not kill your velocity in the long run and, ultimately, to achieve hyper-productivity. I thus believe that a typical "Definition of Done" should include something like "no increase of technical debt" (put your quality standards in there). If you think a new feature will impact existing code that should be refactored, include this cost in the estimate (or create a refactoring item in your Product Backlog) and explain things to your Product Owner. Because in the end, it's up to the Product Owner to prioritize items and to decide if quality can be sacrificed temporarily (if your business dies because you don't release a feature, what is the point of refactoring existing code?). But he must be aware that this can't be a long-term strategy or he will kill the team's velocity.
bta: Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements.
Definitely a noteworthy observation; my solution to this would be as follows:
Perform regular code reviews. Every code review should recommend actions to improve on deficiencies in the code.
There is now a requirement for jobs which improve code quality. Build these into the sprint and track them in the same way as any other job.
If your manager needs any more convincing, cast 'the maintainer' as a user, and describe some user stories for them - and then 'features' are things like 'the code is fully commented with xml doc comments' and 'the code does not produce any warnings from ReSharper'
If you can justify it as part of the process of completing other tasks by identifying issues/risks with current sets of code, and it is a better end result, go for it. But don't get overzealous and screw the timelines/budget.
My organization has been experimenting with the introduction of more "Agile" methods. We've been trying the Scrum approach for a short while, and most of the team has, more or less, adapted to it. I like it as a whole, but I'm concerned about one potentially severe impact of the methodology: as teams are consistently focused on features and backlog items, and testers are more integrated with the overall development process, it seems like skill sets are becoming blurred, and people are sensing less respect for their individual abilities.
Some of our developers are excellent at server-side technologies and optimization of heavy-weight data provisioning. Others have invested a large amount of their careers learning GUI technologies and have developed a fundamental understanding of users and usability in an application. Neither skill set is better than the other, but they are certainly different.
Is this an inevitable result of the Scrum process? Since everyone on the team (as I understand it) contributes to satisfying the next feature/requirement, backlog item, or testing goal at hand, the underlying philosophy seems to be "anyone can do it." This is, in my experience, simply not true. Most engineers (developers, testers, etc.) have a particular skill set they have honed over the years, and the Scrum methodology, in my mind, tends to devalue those very abilities they were previously respected for.
Here's an example for clarification:
If a sudden change of technology occurs on the server-side data provisioning, and every item on the to-do list for the sprint is based on this new change, the GUI developers (who likely haven't had time to become acclimated with the new technology) might not be able to contribute to the sprint. At the very least, they will need to invest time to get ramped up, and then their code will be suspect because of their lack of experience.
I understand the need for rapid development to discourage "role silos", but doesn't this discount one fundamental reality: people develop skills according to necessity, their interests, or their experiences. People seem to be less motivated when they perceive their position is one of "plug-ability" (e.g. we can "plug" anyone in to do this particular task). How does Scrum address this? If it doesn't, has anyone addressed this when adopting the Scrum methodology?
The short answer is an emphatic NO! Scrum does not blur or depreciate the skills required for specialization. Scrum does not promote generalization.
The long answer is that in Scrum, the most important thing is to get the work "Done". The team, as a team (as opposed to a collection of individual "stars") collaborate, as needed, in order to get the job done. Whatever it takes - however they want (Scrum is about self managing, self motivating teams, right?).
What this means is that a scrum team may be composed of several specialists, who primarily do what they specialized in (DBA, Graphic Design, even technical writers). The team, as a whole, should have all of the skills required to fulfill the requirements. This is not the same as saying that each team member has to have all of the skills aforementioned.
That being said, it is often desired - often by the members themselves - that members other than the specialists be at least adequate in skills different from their specialty. Another poster already mentioned Scott Ambler's "General Specialist". This helps the team when there's too much work of one kind, when the specialist is absent, and it helps the member when he really would like to gain experience outside his specialty.
Given that the team is self organizing, if for some reason a specialist finds himself in the middle of the sprint, without any work to do in his specialty, the best way to deal with it, is to simply ask the specialist what he wants to do. Let the team decide. The specialist can decide to help in his other areas of adequacy, do a POC for the next sprint, "shore-up" the defenses by fixing some long forgotten technical debt, or shine the shoes of the members who are working.
Yup. I don't know if this is the long answer. But it definitely was a long answer.
:-)
The point of Scrum is for the developers to self-organize. We use scrum where I am, and jobs get passively sorted by a person's focus. We don't do it on purpose with a chart and list, it just happens. We all know who's best at what, or what their main/secondary focuses are. If the 'main' person needs help, they get the person/people with a secondary focus in it to help. We do get plenty of tasks not necessarily in line with whatever our particular focus is, but you always know who to ask for help then.
For your example - I don't know that if you, say, had 3 server guys and 5 GUI guys, you'd expect to get all the work done in that sprint (if the server guys plus some help from the others weren't enough). The way the sprint is supposed to work is that, from a prioritized list, the developers pick what they think they can get done in that 30-day timeframe. If that meant the GUI guys needed 2 days of server-side training in order to help, that's what it'd mean. Unless there were concurrent things also high up the list that they could do instead. The sprint tasks are not supposed to be dictated by management as a pseudo-deadline.
If you have a Safari account, there's an interesting, mostly case-study book by one of the guys who invented Scrum.
I've been working as a ScrumMaster for about 18 months and have worked with two different teams. I initially expected to experience the potential issues you raise but this has not been the case. What I generally observe is that the team evolves into a mixture of specialists and generalists as people find the appropriate role for themselves - one that they can enjoy and be successful at. This is self-organisation at work. I have never had a case where our specialists were sitting idle.
If this did occur, I would expect it to be raised as an issue in Sprint Retrospective and the team would discuss how to improve the situation. The most obvious (and brutal) conclusion would be to change the team composition.
I am not sure why skill sets would get blurred. There is a fair amount of confusion in the agile world. Scrum is a project management process, not a software development process, and should not be seen as one. The engineers have to follow their own methodologies, like TDD or extreme programming, to add their own part to being agile.
Nothing goes away in scrum.
PMs still document as they go.
Architects still architect their components. The only thing is that they delay some major decisions to a more responsible point in time.
Developers should still follow best practices such as SOLID principles to enable for refactoring in a consistent manner as features change.
I think Scott Ambler addresses this issue very thoroughly in http://www.agilemodeling.com/essays/generalizingSpecialists.htm...
His concept of a Generalizing Specialist is exactly the thing Collective Ownership / Scrum Team calls for, and makes total sense to me.
It's hard to achieve in real life though ;-)
If you find for any reason ('sudden change of technology' or not) that the amount of work required for a system over a sprint is greater than the amount available then there's a problem with your scheduling.
One fix is that, as you suggest, you take programmers from other areas and throw them onto the mix. How well this works depends on the skills of that person and how different the problem domain is, but treating programmers as generic units that can be farmed out as needed is generally not a successful strategy for developing software.
This is still a scheduling problem though.
The best thing about Scrum is exactly the fact that skills do get a bit blurred! The point is to avoid silos at all costs by spreading specialist knowledge across the team and letting people work a bit outside their comfort zone.
Obviously this is not for everybody. Some developers are happy in their own narrow specialist field and such people are more of a hindrance in a Scrum process than an asset, whereas well-rounded and multi-talented people who are determined to get the job done, usually adapt very very well to it and are far more productive.
One of the key benefits of Scrum is to get the whole team actually involved and invested in the project, instead of tackling their own special tasks and then riding off to the horizon. I'd claim that for most people this is a far more rewarding way of working than the conveyor-belt approach of waterfall processes.
So I'd advise to boldly embrace the mixing of skills and having people come together to take down nasty problems instead of relying on specialist silos. The result of teams consisting of motivated people can be surprising.
Sounds like this would lead to more well-rounded developers, and also allow those who are experts in certain areas to continue to contribute their expertise.
I haven't used Scrum much myself (yet), but from your description, these types of teams would lead to a team/organization that is also more well-rounded as a whole - and shouldn't that be the goal of any team?
Handling sudden changes is part of Agile, and this may mean that some people have to go off and learn new skills. Of course this is more within the general Agile philosophy than anything Scrum-specific. There may be some extreme cases where the customer or business decides to change the world by bringing in something new and thus has to handle the subsequent pain of those people ramping up, but if this is what they want and the developers are overruled, then there are only a couple of choices: take your lumps and try to handle the major changes, or quit and get out of there.
While there can be some cases where someone that has specialized in something may be able to do things faster, this doesn't necessarily mean much if that is just one person on the team that is an expert and there is enough work in that area for 10 people for the whole sprint. Should those not an expert simply not do that work and let that one person attempt to get through as much as he or she can? I don't think so but there should be something to be said for those that aren't the best at something still trying to get done what they can get done.
Yesterday I had a team leader of another team say that they took a while to figure out something I wrote on a wiki page because I referred to obtaining code from source control as "checking out", which apparently confused them. They said that they were used to ClearCase and had only heard of the term "joining a project", and said that they haven't really programmed much for a long time.
While this is fine, what it then made me think of is the different types of team leaders I've had over the years. I've had some that have been almost purely managerial and I've had those that are programmers that do managerial things at the same time.
Do people have a preference as to what kind of team leader they have? Do you care whether your team lead is active in the development of your product? I find team leaders who actually sit and code like the rest of the team more likely to understand things like (from my experience):
things aren't always as simple as they sound. Team leaders I've had who don't code or rarely code at all believe everything is a piece of cake and shouldn't take much time at all (which perhaps might be the case if you want to hack it together)
they are more understanding that developers don't always like sitting in long meetings and do their best to avoid getting their team into as many pointless meetings as possible
they understand what you say from a technical point of view. Those that might not have coded for a while might not be up to speed with a lot of the new technologies, techniques or lingo
I find it much more satisfying to have a team leader who has the mind of a developer and likes to get their hands dirty in the code as well. Perhaps there are some people out there that like team leads who distance themselves from the actual coding side of things and simply doles out the work, or perhaps another type of team leader that I haven't mentioned?
A team leader has to be a coder -- they can't lead the team unless the team respects them and where they're taking everyone.
A team manager, on the other hand, can either be a coder or someone who is just well organised and knows when to ask questions and interface to other management.
It is possible to find both a manager and a leader in the same person, but more often the roles (should be) separate and distinct.
You should read the book Managing Humans. I am of the opinion that managers should keep their hands out of the code. They have more important responsibilities, like keeping people away from developers so they can do their job. Having them jump into development creates confusion: they aren't in it enough to know what's going on, and their time is divided between that and other things, so it is difficult to count on them for major pieces of functionality. Plus, it really sucks when you have to tell your manager that something they just wrote needs to be changed, and you have to go back and redo it. Managers are really there to jump on the grenades for the rest of the team, so the team can focus on accomplishing the task at hand.
That being said, should managers know about software engineering? Yes, of course they should; that's the field they are in. Should they know how to code in the latest and greatest whiz-bang technology? That shouldn't really matter as long as they get how software development works.
I have no preference, I can't, I have to work with all of them, even though too many cooks spoil the broth. On a multi-developer typical project I have a technical lead, project manager and a non-technical customer. Of course, divisional and programme management will each stick their head in.
There are a number of types of leader, each have their own traits:
Non-technical customer: "The customer is always right." Often wants a moon-on-a-stick. Will call both the management and the technical bods and take the best answer as gospel.
Team manager/line manager: Somewhat pastoral role. Not particularly interested in the project I'm working on right now. Steps in when there is a decision to be made between project priorities. Probably really wants to be a coder, and delegates all the rest of his work that he can to his subordinates.
Project manager: Varying degrees of technical know-how. Is concerned only with timescales and costs. Does not understand, "I don't know how long it's going to take; I need to play with it for a couple of days first to get a feel."
Team leader/technical lead: Just another developer, but with more experience. Responsible for technical decision making that will affect the whole project. Often fighting with the project manager to carry out good engineering practice, even though it will take longer in the short term.
Team leader/glorified secretary: Someone who is supposed to lead the team, but acts more like a secretary. (Usually a grade above the team.) Answers the phones, insulates customers from the technical bods. This works fine until they ask a technical question, where the glorified secretary tries to blag his/her way out of it, and eventually they work around the secretary and talk directly to the team.
We typically have a PM (non-technical) who manages the project from an admin viewpoint, and a Tech Lead who manages the technical aspects and provides technical leadership to the team.
The Tech Lead will code parts of the project and will probably be the main (only) developer for the "Proof of Concept" stage.
On some smaller projects, they are the same person but it's a rare combination.
The absolute worst Software Leads/Chief Software Engineers that I've worked with were the ones that wanted to be intimately involved in the technical details. Too many important tasks were either missed or just not done. Managing a team is a full-time job. If the lead wants to get involved in the technical aspects it will certainly come at the expense of the managerial aspects.
I’ve only had 2 Software Leads/Chief Software Engineers out of dozens that I thought were worthwhile. While both were previously software engineers, those days were long gone for both of them. They knew it. They didn’t even try to pretend. Their job was now to manage. Their job was to make sure the developers had every chance to succeed. They did their best to remove all obstacles and make sure everyone was making progress.
I have a theory, but have never seen it in action, that the best software lead would be someone who is not, nor ever has been a software developer. They specialize in the true spirit of management, specifically that of being a facilitator. Unfortunately, most managers are more politically motivated or are just in the job because they've reached their pinnacle technically.
How do you go about the requirements gathering phase? Does anyone have a good set of guidelines or tips to follow? What are some good questions to ask the stakeholders?
I am currently working on a new project and there are a lot of unknowns. I am in the process of coming up with a list of questions to ask the stakeholders. However, I can't help but feel that I am missing something or forgetting to ask a critical question.
You're almost certainly missing something. A lot of things, probably. Don't worry, it's ok. Even if you remembered everything and covered all the bases stakeholders aren't going to be able to give you very good, clear requirements without any point of reference. The best way to do this sort of thing is to get what you can from them now, then take that and give them something to react to. It can be a paper prototype, a mockup, version 0.1 of the software, whatever. Then they can start telling you what they really want.
In general, I try and get a feel for the business model my customer/client is trying to emulate with the application they want built. Are we building a glorified forms processor? Are we retrieving data from multiple sources in a single application to save time? Are we performing some kind of integration?
Once the general business model is established, I then move to the "musts" and "must nots" for the application, to dictate what data I can retrieve, who can perform what functions, etc.
Usually if you can get the customer to explain their model or workflow, you can move from there and find additional key questions.
The one question I always make sure to ask in some form or another is "What is the trickiest/most annoying thing you have to do when doing X?" Typically the answer to that reveals the craziest business/data rule you'll have to implement.
Hope this helps!
Steve Yegge is fun to read, but there is money to be made in working out what other people's requirements are, so I'd take his article with a pinch of salt.
Requirements gathering is incredibly tough because of the manner in which communication works. It's a four-step process that is lossy at each step.
I have an idea in my head
I transform this into words and pictures
You interpret the pictures and words
You paint an image in your own mind of what my original idea was like
And humans fail miserably at this with worrying frequency through their adorable imperfections.
Agile gets it right in promoting iterative development. Getting early versions out to the client is important in identifying which features are most important (what ships in 0.1 - 0.5 ish), helps to keep you both on the right track in terms of how the application will work, and quickly identifies the hidden features that you would otherwise miss.
The two main problem scenarios are the two ends of the scales:
Not having a freaking clue about what you are doing - get some domain experts
Having too many requirements - the feature pit. Question, cull (prioritise ;)) features and use iterative development.
Yegge does well in pointing out that domain experts are essential to produce good requirements because they know the business and have worked in it. They can help identify the core desire of the client and will help explain how their staff will use the system and what is important to the staff.
Alternatives and additions include trying to do the job yourself to get into the mindset or having a client staff member occasionally on-site, although the latter is unlikely to happen.
The feature pit is the other side, mostly full of failed government IT projects. Too much, too soon, not enough thought or application of realism (but what do you expect? They only have about four years to make themselves feel important). The aim here is to work out what the customer really wants.
As long as you work on getting the core components correct, efficient and bug-free, clients usually remain tolerant of missing features, as long as they eventually arrive in later shipments. This is where iterative development really helps.
Remember to separate the client's ideas of what the program will be like and what they want the program to achieve.
Some clients can create confusion by communicating their requirements in the form of application features, which may be poorly thought out or made redundant by much simpler functionality than they think they require. While I'm not advocating calling the client an idiot or not listening to them, I feel it is worth forever asking why they want a particular feature, to get at its underlying purpose.
Remember that in either scenario it is imperative to root out the quickest path to fulfilling the customer's core need, putting you in a situation where you are both profiting from the relationship.
Wow, where to start?
First, there is a set of knowledge someone should have to do analysis on some projects, but it really depends on what you are building for who. In other words, it makes a big difference if you are modifying an enterprise application for a Fortune 100 corporation, building an iPhone app, or adding functionality to a personal webpage.
Second, there are different kinds of requirements.
Objectives: What does the user want to accomplish?
Functional: What does the user need to do in order to reach their objective? (think steps to reach the objective/s)
Non-functional: What are the constraints your program needs to perform within? (think 10 vs 10k simultaneous users, growth, back-up, etc.)
Business rules: What dynamic constraints do you have to meet? (think calculations, definitions, legal concerns, etc.)
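To make the four kinds concrete, here is an invented example for a hypothetical online ordering system (none of this comes from the question; it only shows how the categories differ). Objective: "a customer can place an order in under five minutes." Functional: "the user selects items, enters a shipping address, and confirms payment." Non-functional: "checkout must handle 500 concurrent users with sub-second page loads." Business rule: "orders over $500 require a manager's approval."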
Third, the way to gather requirements most effectively, and then get feedback on them (which you will do, right?), is to use models. Use cases and user stories are a model of what the user needs to do. Process models are another version of what needs to happen. System diagrams are just another model of how different parts of the program(s) interact. Good data modeling will define business concepts and show you the inputs, outputs, and changes that happen within your program. Models (and there are more than I listed) are really the key to the concern you list. A few good models will capture the needs, and from models you can determine your requirements.
Fourth, get feedback. I know I mentioned this already, but you will not get everything right the first time, so get responses to what your customer wants.
As much as I appreciate requirements, and the models that drive them, users typically do not understand the ramifications of all their requests. Constant communication with chances for review and feedback will give users a better understanding of what you are delivering. Further, they will refine their understanding based on what they see. Unless you're working for the government, iterations and/or prototypes are helpful.
First of all, gather the requirements before you start coding. You can begin the design while you are gathering them, depending on your project life cycle, but you shouldn't ever start coding without them.
Requirements are a set of well written documents that protect both the client and yourself. Never forget that. If no requirement is present then it was not paid for (and thus it requires a formal change request), if it's present then it must be implemented and must work correctly.
Requirements must be testable. If a requirement cannot be tested, then it isn't a requirement.
Requirements must be concrete. That means stating "The system user interface shall be easy to use" is not a correct requirement - there is no way to test or verify it.
In order to actually "gather" the requirements you need to first make sure you understand the business model. The client will tell you what they want in their own words; it is your job to understand it and interpret it in the right context.
Hold meetings with the client while you're developing the requirements. Describe them to the client in your own words and make sure you and the client have the same understanding of each requirement.
Requirements should be concise and testable, but also keep track of everything else that comes up in the meetings - diagrams, doubts - and try to maintain a record of every meeting.
If you can use an incremental life cycle, that will give you the ability to improve any badly gathered requirements.
You can never ask too many or "stupid" questions. The more questions you ask, the more answers you receive.
According to Steve Yegge, that's the wrong question to ask. If you're gathering requirements, it's already too late; your project is doomed.
1. High-level discussions about purpose, scope, limitations of operating environment, size, etc.
2. Audition a single-paragraph description of the system; hammer it out.
3. Mock up the UI.
4. Formalize known requirements.
Now iterate between 3 and 4 with more and more functional prototypes and more specs with more details. Write tests as you go. Do this until you have functional software and a complete, objective, testable requirements spec.
That's the dream. The reality is usually after a couple iterations everybody goes head-down and codes until there's a month left to test.
Gathering Business Requirements Are Bullshit - Steve Yegge
Read the Agile Manifesto - working software is the only measure of the success of a software project.
Get familiar with agile software practices - study Scrum, lean programming, XP, etc. - this will save you a tremendous amount of time, not only for the requirements gathering but also for the entire software development lifecycle.
Keep up regular discussions with Customers, and especially the future users and key users.
Make sure you talk to the people who understand the problem domain - e.g. specialists in the field.
Take small notes during the talks.
After each conversation, write an official requirements list and present it for approval. Later on it will be difficult to argue against documentation that was agreed.
Make sure your Customers know approximately what the expense in time and money is for implementing "nice to have" requirements.
Make sure you label the requirements as "must have", "should have" and "nice to have" from the very beginning, and ensure Customers understand the differences between those types as well.
Integrate all documents into the latest and final requirements analysis (or the current one for the iteration, or whatever agile programming cycle you are using...).
Remember that requirements do change over the software life cycle, so gathering is one thing but managing and implementing is another.
KISS - keep it as simple as possible.
Also study the environment where the future system will reside - there are more and more technological constraints from legacy or surrounding systems, since companies prefer not to throw away money they have invested over decades, even if in our modern minds 20-year-old code is garbage...
Like most stages of the software development process, this works best as an iterative process.
First find out who your users are -- the XYZ dept,
Then find out where they fit into the organisation -- part of Z division,
Then find out what they do in general terms -- manage cash
Then in specific terms -- collect cash from tills, and check for till fraud.
Then you can start talking to them.
Ask what problem they want you to solve -- you will get an answer like "write a bamboozling system using OCR with shark technologies".
Ignore that answer and ask some more questions to find out what the real problem is -- they can't read the till slips to reconcile the cash.
Agree a real solution with the users -- get a better ink ribbon supplier - or connect the electronic tills to the network and upload the logs to a central server.
Then agree in detail how they will measure the success of the project.
Then and only then propose and agree a detailed set of requirements.
I would suggest you read Roger Pressman's Software Engineering: A Practitioner's Approach.
Before you go talking to the stakeholders/users/anyone, be sure you will be able to put down the gathered information in a useful and lasting way.
Use a sound recorder if it is OK with the other person and the information is bulky.
If you hear something important and need a reasonable amount of time to write it down, you have two choices: ask the other person to wait a second, or say goodbye to that precious information. You won't remember it right; ask any neuroscientist.
If you detect that a point needs deeper review, or that you need some document you just heard of, make sure you make a commitment with the other person to be sent that document or to schedule another meeting with a more specific purpose. Never say "I'll remember to ask for that xls file", because in most cases you won't.
Not too long after the meeting, summarize all your notes, recordings and fresh thoughts. Just summarize it right. Create effective reminders for the commitments.
Again, just after the meeting is the perfect time to understand why the gathering you just did was not as complete as you thought at the end of the meeting. That's when you will be able to put down a lot of meaningful questions for another meeting.
I know the question was asked from a pre-meeting perspective, but please be aware that you can work on these matters during and after the meeting and end up with a much more useful, complete and higher-quality gathering.
I've been using mind mapping (like a work breakdown structure) to help gather requirements and define the unknowns (the #1 project killer). Start at a high level and work your way down. You need to work with the sponsors, users and development team to ensure you get all the angles and don't miss anything. You can't be expected to know the entire scope of what they want without their involvement...you - as a project manager/BA - need to get them involved (most important part of the job).
There are some great ideas here already. Here are some requirements gathering principles that I always like to keep in mind:
Know the difference between the user and the customer.
The business owners who approve the shiny project are usually the customers. However, a devastating mistake is the tendency to confuse them with the user. The customer is usually the person who recognizes the need for your product, but the user is the person who will actually be using the solution (and will most likely complain later about a requirement your product did not meet).
Go to more than one person
Because we’re all human, and we tend to not remember every excruciating detail. You increase your likelihood of finding missed requirements as you talk to more people and cross-check.
Avoid specials
When a user asks for something very specific, be wary. Always question the biases and see if this will really make your product better.
Prototype
Don’t wait till launch to show what you have to the user. Do frequent prototypes (you can even call them beta versions) and get constant feedback throughout the development process. You’ll probably find more requirements as you do this.
I recently started using the concepts, standards and templates defined by the International Institute of Business Analysts organization (IIBA).
They have a pretty good BOK (Body of Knowledge) that can be downloaded from their website. They also offer a certification.
Requirements engineering is a bit of an art; there are lots of different ways to go about it, and you really have to tailor it to your project and the stakeholders involved. A good place to start is with Software Requirements by Karl Wiegers:
http://www.amazon.com/Software-Requirements-Second-Pro-Best-Practices/dp/0735618798/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1234910330&sr=8-2
and a requirements engineering process, which may consist of a number of steps, e.g.:
Elicitation - for the basis for discussion with the business
Analysis and Description - a technical description for the purpose of the developers
Elaboration, Clarification, Verification and Negotiation - further refinement of the requirements
Also, there are a number of ways of documenting the requirements (Use Cases, Prototypes, Specifications, Modelling Languages). Each have their advantages and disadvantages. For example prototypes are very good for elicitation of ideas from the business and discussion of ideas.
I generally find that writing a set of use cases and including wireframe prototypes works well to identify an initial set of requirements. From that point it's a continual process of working with technical people and business people to further clarify and elaborate on the requirements. Keeping track of what was initially agreed and tracking additional requirements are essential to avoid scope creep. Negotiation plays a big part here also, between the various parties, as per the Broken Iron Triangle (http://www.ambysoft.com/essays/brokenTriangle.html).
IMO the most important first step is to set up a dictionary of domain-specific words. When your client says "order", what does he mean? Something he receives from his customers, or something he sends to his suppliers? Or maybe both?
Find the keywords in the stakeholders' business, and let them explain those words until you comprehend their meaning in the process. Without that, you will have a hard time trying to understand the requirements.
I wrote a blog article about the approach I use:
http://pm4web.blogspot.com/2008/10/needs-analysis-for-business-websites.html
Basically: questions to ask your client before building their website.
I should add that this questionnaire sheet is only geared towards basic website builds - like a business web presence. It's a totally different story if you are talking about web-based software, although some of it is still relevant (e.g. questions relating to look and feel).
LM
I prefer to keep my requirements gathering process as simple, direct and thorough as possible. You can download a sample document that I use as a template for my projects at this blog posting: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html