Who modifies affected components in an agile environment? [closed] - build-process

In a continuous integration, agile environment, if I make a change in class A (e.g. renaming attributes), which I created and have been working on, and that change affects class B, which "belongs" to someone else, who modifies class B when I want to check in my change? Me, or class B's owner?
I suppose it is more agile if I modify it, so that I don't have to notify other people; but at the same time, the people working on class B are more aware of the impact of modifying it...

In an agile environment, class B (like all classes) belongs to the team. We call this Shared Code Ownership. You should check in working code; if that means you need to adjust class B to conform to the changes you make to class A - adjust! Better yet, pair.
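As a minimal sketch of what "adjust class B" might look like (the class and attribute names here are hypothetical, purely to illustrate collective ownership): the same check-in that renames the attribute in A also fixes the reference in B, so the build stays green.

    # Hypothetical sketch: a rename in class A ripples into class B,
    # and the author of the rename fixes both in one check-in.
    class A:
        def __init__(self):
            self.unit_price = 0.0  # renamed from "price" as part of my change

    class B:
        def total(self, item: A, quantity: int) -> float:
            # B "belongs" to a teammate, but under shared code ownership
            # I update this reference myself so the checked-in code works:
            return item.unit_price * quantity  # was: item.price * quantity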

"Individuals and interactions over process and tools." Communicate the change upfront with the other people impacted. Unless the code is trivial, you may not understand the full impact of the change. Even if you do, you owe it to your other teammates to keep them informed.
"Don't break the build." Checking code in that you know will break the build is not a good idea. Once you have communicated with the others that are impacted, work with them to get the code changes completed. Attempt to get the code changes checked in so at least the nightly build is not broken.
Just my opinion....
Bob

who modifies class B when I want to check in my change? Me, or class B's owner?
With no disrespect, I think your question is so basic that it suggests you do not yet have even a basic understanding of what being Agile means. Well, maybe that's why you asked this question.
Here are my suggestions:
In this kind of situation you really should walk up to the other developer who might be impacted by your change and have a quick face-to-face conversation about it. That quick conversation may lead to the two of you pair programming to make sure the build does not break and no one is affected.
Please read all the Agile principles again, and write down what you understand from each one of them. Implement those principles in your day-to-day development life. This is the only way to become Agile. There is no certification or book that magically makes someone Agile. Being Agile has to be self-realized, so practice the principles daily until they become a habit.
So the "Information" is conveyed using the most effective method i.e. f2f conversation. The problem is solved on the basis of the collective responsibility principle, most ideal way to fix it is pair programming.
Reference:
Agile principle
"The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation."
Also a general Agile Guideline from the manifesto:
"Responding to change over following a plan"

Agile includes team code ownership, communication
As @Carl Manaster said, the code belongs to the team. And as @rcravens suggests, agile is about communication. Have a quick meeting with the author of B and explain your proposed change, to be sure you understand its impact. If it's complex, pair with B's author on the change. When the change is complete, if you think it might affect other developers on the team, call a brief team meeting and let them know of the change.
By the way, how's your design?
Your question may also be revealing a design issue - A and B might be too tightly coupled. After your tests pass and you've implemented the change, I suggest that you examine your code and see whether something needs refactoring. (Remember, TDD is Red/Green/Refactor.) In particular, if changing class A means you have to change class B, then you might not be following the Single Responsibility Principle (SRP) arm of SOLID practice.
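As a sketch of what the decoupled version might look like (hypothetical names; one approach among many): if B depends on a narrow interface rather than on A's internals, renaming inside A no longer forces a change in B.

    # Hypothetical sketch: B depends on a small protocol, not on A's internals,
    # so internal renames in A no longer ripple into B.
    from typing import Protocol

    class Priced(Protocol):
        def price(self) -> float: ...

    class A:
        def __init__(self, unit_price: float):
            self._unit_price = unit_price  # internal name is free to change

        def price(self) -> float:
            return self._unit_price

    class B:
        def total(self, item: Priced, quantity: int) -> float:
            # Only the stable Priced interface is referenced here.
            return item.price() * quantity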

Related

Team activity/game for illustrating design in a SCRUM environment [closed]

I'm looking for a team building / training activity for some of my scrum teams. I want something that really illustrates the flexibility that the team has when implementing stories to define the scope and complexity of the feature themselves. Most of the teams have long-term waterfall experience and are used to having a well-defined specification. I'm looking for something that illustrates the need for the team to vary the scope of what they are building themselves, dependent on the time and resources available.
I couldn't find anything at tastycupcakes.com and Google wasn't much help. Maybe someone has prepared something themselves they would care to share?
Edit (in response to request for example in comments)
Suppose the team has committed to building a story for displaying data to a user in a paged list for analysis purposes. The acceptance criteria can be fulfilled easily, but a different implementation might provide added functionality, e.g. wrapping a third-party control which has built-in sorting and grouping functionality.
The point is, because the scrum time window is absolutely fixed, the scope of the implementation may be expanded if the team feels they are ahead of schedule, especially if some technical designs proved less problematic than expected. Conversely, if some tasks have taken longer than anticipated, the team can shortcut the user story while still making sure that what they deliver satisfies the acceptance criteria.
The thing I am trying to get away from is the current mindset that the feature has a specification set in stone, and that's what will be built, whatever the circumstances.
I don't think it is up to the team to define the scope and complexity of a story. It is the PO's job to define the conditions of acceptance, and then it is the team's job to estimate size based on the PO's description. If the stories are right-sized, the conditions are usually pretty tightly defined. This could be why you aren't seeing much out there...
EDIT:
I don't think your example changes my answer. If the PO wanted this "additional functionality" such as sorting etc, they would have defined it in the story or in another story. To build something that isn't asked for is waste. Spending time on a story that is low priority in the backlog is inefficient. Agile is based on building what is needed and only what is needed in order of importance. So I would frown on developers adding "extra goodies" just because they are working on a particular screen.
That does not mean you shouldn't look over all the stories in the backlog and make architectural plans based on what will be needed in the future.
I think I get what you're looking for, but feel free to clarify if I'm mistaken. I'm under the impression you're looking for an exercise that will show the flexibility in implementation details the team has when using user stories.
If so, try an exercise like this.
Split the team into two groups and have the same Product Owner between them (or you can have one Product Owner for each group if both POs know the exercise).
The PO presents a fictional story like, "As an executive at BigSales Co, I want to be able to see, at a glance, which salespeople are performing and which are not, so that I can pair performers with under-performers to improve the overall team performance."
A story like the one above is light on implementation details, but has a very clear business problem to be solved (as user stories should). Using a story like this, give the teams 30 minutes to work on a paper prototype that would satisfy the user story. They can interact as much as they want with the PO during this time frame. The person playing the PO should be careful not to give them implementation details, but leave it to the team to decide, while expressing and clarifying the business need.
At the end of the 30 minutes, have each team present their solution and explain how it satisfies the user story.
The important thing here is that once both teams have presented, it is likely that the two presentations will be quite different and yet both valid. This shows the level of flexibility the team has to provide what they feel is the best solution without having to be told explicitly what to do.
Hope this helps.
In order to estimate the story cost, the team will expect to work with the PO to define, in at least broad terms, the requirements for that feature. In the example you gave, the team may explicitly ask the PO if the sorting & grouping functionality is needed. If they say no, because the PO can't see a use for it at that stage, then the estimate is given on that basis and the implementation done accordingly. No consideration is given to these additional features, on the YAGNI principle. If the requirement for the sorting & grouping comes up subsequently as a result of people using early incarnations of the product, well, that's another story, and it is estimated & scheduled into the backlog accordingly. The scope of the implementation of a story isn't changed just because you've got some time left in an iteration - instead you simply pull the next prioritised item from the backlog and get on with that.
Of course, when implementing the story the team are at liberty to use the most time/cost-effective method that they consider suitable for the evolving product. If this means using a component with additional capability, i.e. a superset of the features, then they could do so (unless this is in breach of non-functional requirements), as long as the acceptance criteria are passed; but they shouldn't go deliberately adding in unrequested functionality just because they've got some time spare in an iteration.
My opinion is somewhere between your description of adapting the features to the time that is left, and the "just fulfill the acceptance criteria and that's it" POV of the two other commentators...
In my point of view, you should all recall the formal setup of a user story:
As a -role-, I want -feature-, so that -aim-.
Given the purpose of a desired feature, the developer can better understand what the PO really wants. He can then come up with additional ideas and ask the PO, e.g.:
Hey PO, if you want -aim-, why don't we do -alternative/addition to feature-? Wouldn't that be even better?
And the PO may agree, and the story is implemented as described but in another interpretation, or the story may be adapted. The points that are important to me:
The PO describes the purpose he would like to have fulfilled, and a feature that is appropriate to it.
The team does not just implement the acceptance criteria like development zombies; they are open-minded and tuned in to the PO's vision in general and the single story's purpose in particular - so they may come up with additional/alternative ideas.
The team also does not enhance user stories or over-engineer on their own authority. That's wasteful!
I hope you share my opinion ;-)
A good training exercise and a fun team-building activity is the XP Planning Game.
The premise is that the product owner gives requirements for something visual (like a coffee machine, a robot) and all requirements must be drawable. The developers have to draw the requirements.
There are several short iterations (the whole exercise takes between an hour and 90 minutes depending on setup time) and it's interesting to see how communication improves and trade-offs happen as the game progresses. I've run this myself during project kickoffs and when converting teams to agile practices, and the team has always found it useful and fun.

scrum and refactoring [closed]

If everything in scrum is all about functional things that a user can see, is there really any place for refactoring code unrelated to any new functional requirements?
I don't think that this has as much to do with Scrum as it does with project management philosophy.
Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements. It's not "work that yields results" like normal development, it's "work that prevents a delay of results later". Given the typically short time-lines used for Sprints, the benefit is often hard to see and nearly impossible to quantify.
Keeping code maintainable needs to be an item on your burn-down list (if you use Scrum). It is just as important as new development. While it may not seem like something that is "visible to the user", ignoring it increases your technical debt. Down the road, when the technical debt piles up enough that your code's lack of maintainability slows down development, the delays in new feature development will be visible to customers.
It's all a matter of management/philosophy. Instead of looking at refactoring and maintainability enhancements as "extra" work that doesn't impact customers, it should be viewed as a time investment to prevent customer-visible delays (and potentially bugs as well) down the road. Developers can sometimes see these benefits more clearly than managers can; if your manager doesn't understand the disadvantages of neglecting maintainability, you might want to grab several other developers and have a chat with your manager.
I think there is a fair case to be made for technical-debt refactoring where the effort/cost impact of maintaining the code is as high as, or even higher than, the cost of refactoring it to improve quality or work better / properly - specifically, to lend it a higher degree of maintainability.
E.g.: if the software is so problematic that you are losing customers, or money, you'd act fast to fix it. Some might argue this is a business requirement of its own, but it's often not placed front and centre on small to mid-sized development projects, which instead focus on the technicalities of creating apps rather than the impact of the quality of the app on the bottom line.
I think you are probably talking about large scale refactoring rather than the continuous refactoring you would do whilst in the whole red-green-refactor cycle.
My approach would be something like this: if refactoring an old feature makes it easier to add a new feature, then go ahead and do it. But in some ways you are right: if there is no pressure on a particular unit to change (i.e. it is completely finished, will never change again, and will never impact other modules) then there is no practical need to refactor. However, I rarely find a module that is quite so finalised.
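For readers unfamiliar with the cycle mentioned above, here is a minimal red-green-refactor sketch (the function is hypothetical, just to show where continuous refactoring fits):

    # Red: write a failing test first.
    def test_initials():
        assert initials("Ada Lovelace") == "A.L."

    # Green (first pass): the simplest code that passes might hard-code
    # two name parts, e.g. parts[0][0] + "." + parts[1][0] + "."

    # Refactor: same behaviour, clearer and more general; the test stays green.
    def initials(name: str) -> str:
        return "".join(part[0] + "." for part in name.split())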
If everything in Scrum is all about functional things that a user can see (...)
Any project and methodology should be about generating business value; you rarely do things just for fun in a business environment. Having said that, I see quality in Scrum (and other Agile methods) as a way not to kill your velocity in the long run and, ultimately, to achieve hyper-productivity. I thus believe that a typical "Definition of Done" should include something like "no increase of technical debt" (put your quality standards in there). If you think a new feature will impact existing code that should be refactored, include this cost in the estimate (or create a refactoring item in your Product Backlog) and explain things to your Product Owner. Because in the end, it's up to the Product Owner to prioritize items and to decide if quality can be sacrificed temporarily (if your business dies because you don't release a feature, what is the point of refactoring existing code?). But he must be aware that this can't be a long-term strategy or he will kill the team's velocity.
bta: Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements.
Definitely a noteworthy observation; my solution to this would be as follows:
Perform regular code reviews. Every code review should recommend actions to improve on deficiencies in the code.
There is now a requirement for jobs which improve code quality. Build these into the sprint and track them in the same way as any other job.
If your manager needs any more convincing, cast 'the maintainer' as a user, and describe some user stories for them - and then 'features' are things like 'the code is fully commented with xml doc comments' and 'the code does not produce any warnings from ReSharper'
If you can justify it as part of the process of completing other tasks by identifying issues/risks with current sets of code, and it is a better end result, go for it. But don't get overzealous and screw the timelines/budget.

What measurement do you use in your development process to determine *Doneness* of your software? [closed]

I just finished listening to a very eye-opening podcast on Hanselminutes about the definition of "Done". So my question to everyone is: when do you consider a piece of software to be "done"? Is it when it's fully unit tested? Is it when it's completely documented? What measurement do you use in your development process to determine the doneness of your software?
When the check clears?
Seriously, every time you write a piece of software, you should have defined what "done" means. First. If you have a customer, then there should be a contract -- specific, measurable, agreed, and testable -- that defines done.
If you don't know where you're going, how will you know when you get there?
Surely dependent on context and purpose of the software?
Lunar Lander (the real thing) would have a very different definition of Done to Lunar Lander the Flash game.
Where I work, DONE is defined by a committee of non-technical managers. You can imagine the fun and games.
Test, unit test, integration test, webtest, peer QA and end user review in the sprint review. Peer QA decides if anything else is necessary, all tests must pass in CI environment. This is in a scrum web-project.
When the client(1) considers it done, it's checked in, backed up, and documented.
Also: "done" rarely exists in web dev.
(1) where client may be an internal PM or such
A good measurement is code churn. Using your source code control software, measure the rate of change. How many lines of code are being removed/added/changed per day. Graph this over time. As you approach being ready to release, this should trend downwards and give an indication of stability and readiness to ship. This assumes that you are actually testing well and making changes to fix bugs or respond to change requests. If your user acceptance test users and integration/unit test activity are continuing to regress and test and you aren't having to make code changes (because they aren't finding anything necessitating a change) then you are probably ready to ship.
If big chunks of code are churning a few days before an arbitrary or externally driven ship date, look out!
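As a rough sketch of how you might compute this (assuming a Git repository; other version control tools have equivalent log commands), you can total the added and removed lines per day from git log --numstat:

    # Rough sketch: daily code churn from a Git repository's history.
    # Assumes git is installed and the script is run inside the repo.
    import subprocess
    from collections import defaultdict

    def daily_churn() -> dict:
        # --date=short makes %ad print YYYY-MM-DD; --numstat emits
        # "added<TAB>removed<TAB>path" for each file under each commit.
        log = subprocess.run(
            ["git", "log", "--numstat", "--date=short", "--pretty=format:@%ad"],
            capture_output=True, text=True, check=True,
        ).stdout
        churn = defaultdict(int)
        day = None
        for line in log.splitlines():
            if line.startswith("@"):
                day = line[1:]
            elif line.strip():
                added, removed, _path = line.split("\t", 2)
                if added != "-":  # binary files report "-"
                    churn[day] += int(added) + int(removed)
        return dict(churn)

    if __name__ == "__main__":
        for day, lines in sorted(daily_churn().items()):
            print(day, lines)

Graphing that output over time gives you the downward trend the answer above describes.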
When the software can be used to satisfy the requirements that define the system.
But I've always thought, "software is never done, it just reaches an acceptable level of incompleteness."
From a development viewpoint 'done' is described quite well by my friend and mentor Simon Baker, here
Alistair Cockburn, Jeff Patton and Mike Cohn also have the following collected views
Shippable quality, which has to be exercised in a go-live, forces teams to really focus on ensuring that incremental work is more carefully thought through.
'Done' is something which all of the people quoted above would be the first to agree is different for every team and project; however, to know that a given piece of work is done, the team must conduct an exercise at the start to flesh out the measure of done-ness and list those criteria.
In so doing, everyone has agreed by consensus what an acceptable completion point is - whether that includes noting the Task in Excel, or writing documentation (or not) becomes an implementation detail for that team/project. The overriding thing is that everyone's understanding of Done is uniform.
Equally, assuming you reach that definition by consensus, it can also be changed as required by consensus.
When all of the requirements are met and all the tests pass.
It's never done, simply versioned and released.
Each project will have its own definition of done; ours is code complete (compiles successfully, etc.), unit tested (or some kind of local testing if that's not possible), and released within one of our packages (so it's available to the other teams).
But the MOST important thing in a DoD is that all parties agree on what it is (team, product owner, manager, etc.), and it should be some kind of public contract; publishing it in a team portal is a good idea.
Any piece of software at any time is always 80% done. At least, that's what my experience teaches ...
When the customer thinks it is.

Design or prototype first? [closed]

When first approaching a project, is it best to step back and think through everything, or just dive in and start coding and polish at a later date? Essentially, do you design first or try to rapidly prototype?
I have been burned by both methods. Sometimes I try to think everything through, but when I actually get down to the nitty-gritty I encounter problems that I didn't take into consideration; and sometimes when I code first I end up with code that needs to be redone to fit in with a better overall design. A lot of my problems stem from inexperience, but any advice is welcome.
Go incrementally and iteratively.
Design a bit, implement a bit.
Starting with a design you can suffer from a tunnel effect where you cannot have any real feedback before you actually implement something.
Starting without design, you can take decisions you'll regret.
The ideal situation is to be able to implement a very skeletal end-to-end version of your system that can be tested, and demonstrated to the customer.
It is always safer to design first, but this does not mean prototyping does not work. The real problem with prototyping is resisting the urge to keep the code you already wrote instead of throwing it away when the time comes to do the design.
There is no silver bullet. It seems like design first is the preferred approach. But you will not be able to predict all complications that can arise while implementing your design. Some of them could potentially be show stoppers. Plus, if you're writing for a client, it's good to be able to show something just to make sure that you're on the same page.
At my workplace we do both - we do a rapid prototype, just to get feedback and get an idea of any potential problems. Then we do a formal design and formal implementation. In most cases we are able to salvage a lot of code from the prototyping stage. I like this approach, since we usually end up with clean, maintainable code.
See Gall's Law. The key is to iterate: design a little, implement a little, test a little, then repeat until you (or your customers) are satisfied. This is the essence of the new breed of "agile" methodologies.
It depends.
Prototyping is most useful when the requirements or a solution aren't necessarily clear. As an example, I am doing a data warehousing project in an environment (large commercial insurance) where financial reconciliation is a big deal. This project has involved a large prototyping exercise to get a system that will reconcile to the financials. As the business rules surrounding this were not well documented, the prototype was instrumental in exposing all of the corner cases.
In other cases, a design-first approach might be more appropriate. This is most applicable where requirements and a sensible solution architecture are reasonably obvious.
You must have some idea of a cohesive architecture before you start working. This is especially true of large scale systems.
Prototyping could be used for particular aspects of the design, e.g. presentation layer.
I think it depends on what kind of business requirements you have up-front. If they are (relatively) detailed and complete, then I'd design based on those requirements. If you have barely anything to work with in the beginning, then prototype out and show your customer what you got, to receive further requirement info.
You should develop using Agile methodologies. Simply put, you design as you go. The team, together with the product owner, defines a list of topics to develop, orders them by importance, and splits the development into iterations. Each iteration has features to be developed, and each feature is designed at the start of the iteration.
See more here.
When first approaching a project, prototype. But don't prototype everything. Prototype one important thing (one "use case" if that means anything) and "turn the inner eye to follow its path" - keep an eye out for the practical problems you encounter in trying to get that one thing done.
Now that you have some idea what it takes to do an important thing, you can design from more than just first principles.
Of course, this assumes you're working in an environment where you can turn out prototypes at minimal cost to ongoing development efforts. But if you're working in such an environment, pepper your design discussions liberally with prototypes. With any luck you may get to keep some of them.
Note that agile methods are not an excuse to avoid designing; they just encourage testing of the design more frequently, and in smaller increments.
I like to sketch the design and break its elements down until I am reasonably sure that there are no obvious unknowns or risks; unknowns and risks are highlighted for 'spike' projects, with a time-box for determining feasibility and notes on possible alternatives if the preferred methods prove unworkable.
Once comfortable with the overall architecture, jump into the features bottom-up (or in priority order) to complete the design, write the initial tests, then implement.
EDIT: note that the question "design or prototype first" makes a bad assumption, i.e. that it is possible to prototype without doing any design, which of course is not the case (unless you are using the million-monkeys methodology).
Design first, unless you're willing to take the risk of throwing out all the work put into your prototype when you find it can't do what you need it to do. At a minimum, you should make some high level designs for your project that can help you make some decisions about how you're going to build your prototype so that you will have a minimum of wasted effort.
If I know what I want to build, I just go right to design.
If I'm building something for a client, then I prototype to ease out more specific requirements from the users.
Maybe not an answer but a suggestion from my experience.
In most cases I'd have been better off if I had started coding earlier. You can design until the cows come home, but if the cows are on the horizon when you start coding, you might find your careful design hard to implement in time.

Scrum - How to get better input from the functional/commercial team [closed]

We are a small team of 3 developers (2 experienced, but new to this particular business sector) developing a functionally complex product. We're using Scrum and have a demo at the end of each sprint. It's clear that the functional team have plenty of ideas, but these are not well communicated to the development team, and the demo raises more questions than it answers.
Have you any recommendations for improving the quality of input from the functional people?
Further info: I think part of the problem is that there are no specs or User Stories as such. Personally I think they need to be writing down some sort of requirements - what sort of things should they be writing down, and to what complexity, given it's an agile process?
Have you tried working with your customer to define / formulate acceptance tests?
Using something like Fit to come up with these tests would result in better specs, as well as force the customer to think about what is really required. The icing on the cake is instant-doc-executable specs at the end of this process.
That is, of course, if your customers are available and open to this approach. Give it a try!
If not (and that seems to be the majority - because it is less work) - calendar flash 'em - schedule meetings/telecons every week until they sing like canaries :) +1 to Dana
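Fit expresses acceptance tests as tables of inputs and expected outputs that the customer can edit. As a rough sketch of the same idea in plain code (the discount rule and numbers here are hypothetical), an executable acceptance test might look like:

    # Hypothetical acceptance criteria for a discount rule, written with the
    # standard unittest module; Fit would express the same cases as a table
    # the functional team can read and edit.
    import unittest

    def discount(order_total: float) -> float:
        # Hypothetical rule agreed with the customer:
        # 5% off orders of 100 or more, 10% off orders of 500 or more.
        if order_total >= 500:
            return order_total * 0.10
        if order_total >= 100:
            return order_total * 0.05
        return 0.0

    class DiscountAcceptance(unittest.TestCase):
        def test_small_order_gets_no_discount(self):
            self.assertEqual(discount(99.0), 0.0)

        def test_medium_order_gets_five_percent(self):
            self.assertEqual(discount(200.0), 10.0)

        def test_large_order_gets_ten_percent(self):
            self.assertEqual(discount(500.0), 50.0)

    if __name__ == "__main__":
        unittest.main()

Writing the cases down this precisely is what flushes out the missing requirements, whichever tool you use.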
Sometimes the easiest way to get input from people is to force it out of them. My company used SCRUM on a project, and found very quickly that people tend to keep to themselves when they already know what they're doing. We ended up organizing weekly meetings where team members were required to display something that was learned during the week. It was forced, but it worked pretty well.
I'm a big believer in Use Cases, detailing the system behaviour in response to user actions. Collectively these can form a loose set of requirements, and in a SCRUM environment can help you prioritise the Use Cases which will form that particular sprint's implemented features.
For example, after talking to your functional team you identify 15 separate Use Cases. You prioritise the Use Cases and decide to plan for 5 sprints. At the end of each sprint you go through and demo the product fulfilling the Use Cases implemented during the sprint, noting the feedback and amending the Use Cases.
I understand that the people you call functional people are acting as Product Owners, right?
I think part of the problem is that there are no specs or User Stories as such. Personally I think they need to be writing down some sort of requirements - what sort of things should they be writing down, and to what complexity, given it's an agile process?
Actually, without any specs you probably have no acceptance tests for the backlog items either. You should ask the PO to write the user stories; I like the "As a -type of user-, I want -some goal-, so that -some reason-." form. Keep in mind that user stories should be INVEST - Independent, Negotiable, Valuable to users or customers, Estimable, Small and Testable. What is a must is to have the acceptance tests written together with the story, so that the team knows what the story must be able to do in order to be considered done.
Remember that as the product evolves, the PO is expected to have ideas as he sees the working product. That's not a bad thing; actually it is one of the best things you can get from Agile. What you have to pay attention to is that these ideas must be included in the product backlog and prioritized by the PO. And, if it's necessary and will add value to the customer, the idea should be planned to be built in the next sprint.
Someone from the functional team should be part of the team and available to answer your questions about the features you're adding.
How can you estimate backlog items if they are not detailed enough?
You could establish a rule that backlog items that do not have clear acceptance criteria cannot be planned.
It would be better to have someone from the functional team acting as Product Owner, to determine, choose and prioritize the backlog items, and/or as Domain Expert.
Also, make sure everyone in both the functional team and the development team speaks the same language, so as to avoid misunderstandings; see ubiquitous language.
Track the time lost waiting for answers from the functional team, as well as the time wasted developing unnecessary features or reworking existing features so that they fit the bill.
Are they participating in the stand-up meetings?
You could propose to have a representative at each (or some) of them, to ask them for input before the end of the sprint.
Are you doing stand-up meetings, and do you have a burn-down chart? I think those two areas would benefit you greatly.
I recommend the book "Practices of an agile developer" it is full of suggestions how to make a scrum team successful. It also gives good tips how to get the product owner/customer more involved and how to get the whole process rolling. It's worth the money IMHO.
I agree that you need some sort of requirements (user stories or something similar).
One piece of advice I can give is to use visual aids with the functional teams. When customers have plenty of ideas (as you've said), they usually also have a visual idea of what a feature looks like; when the developed product doesn't fit this visual idea it creates a lot of doubts, even if it does the job functionally.
When discussing functionality with customers, I try to be very visual. Drawing sketches on a board, or even verbally describing what something would look like. Trying to find a common visual image. You can then take a photo of the sketches and use them as part of the documentation.
Another advice is to keep your sprints as short as possible, so that you do more frequent demos. But you may already be doing this, since you didn't mention your current sprint duration.
