Sprint estimation handling after QA increase - scrum

Long story short: we had a team of 3 devs and 1 QA working in a stable rhythm of two-week sprints of 50 story points. We discussed with the PO increasing the team by one dev and one QA. The QA was added to the team, but for various reasons the developer will no longer be joining.
Now, of course, the PO is asking whether we can increase the number of story points, considering the team has grown by one QA. This is an awkward situation, as most of the tasks we estimate require development, and since development capacity is unchanged, we cannot estimate more. But from his side, he probably also cannot accept that the team has grown while the estimates stay the same. So what are the common solutions to this?
The way I see it, a QA can handle some non-dev tickets like documentation, research, etc. So one way to satisfy both parties would be for the next sprints to include one or two additional tickets (besides the 30 estimated ones), with the condition that those one or two tickets require only QA work. Any other suggestions?

I think I would pull in at most one more small story for the first sprint the new QA participates in. It might be better still to pull in none, and just let them pair with someone for a sprint or two.
Whether you pull in a new story or not, be sure to talk about this at the next retro. Did the new person speed the team up or slow it down? Remember that velocity belongs to the team, so it's up to the team to decide whether they are ready to pull in more stories in the next sprint.
I recommend ramping up slowly after that first sprint. If you find room at the end of the sprint, that's a signal you might be able to pull in an additional story in the next one. You won't have to guess whether you can take on more stories; the statistics from the sprint will tell you.

Related

Using OptaPlanner to create school time tables with some tricky constraints

I'm going to use OptaPlanner to lay out time tables for a school.
We're laying out the time tables for a full semester and every week could, if necessary, be slightly different.
There are some tricky constraints to take into account:
1. Weekly schedules
The lectures in one subject should be spread out somewhat evenly over the semester.
We can't, for example, put 20 math lectures in the first week and "be done" with math for the semester.
In fact, it's nice to have some weekly predictability:
"Science year 2 has biology on Tuesday mornings"
This constraint must not be carved in stone, however. Some weeks have to include work experience sessions, PE excursions, etc., in which case they must deviate from the other weeks.
Problem
If I create a constraint that, say, gives -1soft for not scheduling a subject at the same time as the previous week, then OptaPlanner will waste a lot of time before it "accidentally" finds a good placement for a lecture. And even if it manages to converge so that each subject is scheduled at the same time every week, it will never manage to move an entire series of lectures by moving them one by one. (That local optimum will never be escaped.)
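(For concreteness, a soft constraint along those lines might look roughly like the sketch below in OptaPlanner's ConstraintStream API. The Lecture fields are hypothetical and the exact API varies by OptaPlanner version:)

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.Joiners;

// Hypothetical Lecture fields: subject, week (fixed) and period
// (day of week + hour, the planning variable).
Constraint weeklyPredictability(ConstraintFactory factory) {
    return factory.forEach(Lecture.class)
            .join(Lecture.class,
                    Joiners.equal(Lecture::getSubject),
                    // pair each lecture with the same subject's lecture one week later
                    Joiners.equal(lecture -> lecture.getWeek() + 1, Lecture::getWeek))
            .filter((previous, next) -> !previous.getPeriod().equals(next.getPeriod()))
            .penalize(HardSoftScore.ONE_SOFT)
            .asConstraint("Subject not at the same time as previous week");
}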
2. Cross student group subjects
There's a large correlation between student groups and courses; for example, the students in Science year 2 mostly read the same courses: Chemistry for Science year 2, Biology for Science year 2, ...
The exception is language courses.
Each student can choose to study French, German or Spanish. So Spanish for year 2 is studied by a cross section of Science year 2 students, and Social Studies year 2 students, etc.
From the experience of previous (manual) scheduling, the optimal solution is almost guaranteed to schedule all language classes in the same time slots. (If French is scheduled at 9 on Thursdays, then German and Spanish can be scheduled "for free" at 9 on Thursdays.)
Problem
There are many time slots in one semester, and the chance that OptaPlanner will discover a solution where all language lectures are scheduled at the same time by randomly moving individual lectures is small.
Also, similarly to problem 1: if OptaPlanner does manage to schedule French, German and Spanish at the same time, these "blocks" will never be moved elsewhere, since the lectures are individual entities and the chance that all of them "randomly" move to the same new slot is tiny, even with a large tabu history length and so on.
My thoughts so far
As for problem 1 ("Weekly predictability") I'm thinking of doing the following:
In the construction phase for the full-semester schedule, I create a reduced version of the problem that schedules a reduced set of lectures into a single "template week". Let's call it a "single-week pre-scheduling". This template week is then repeated in the construction of the initial solution for the full semester, which is the "real" planning problem.
The local search steps will then only focus on inserting PE excursions etc., and on adjusting the schedule for the affected weeks.
As for problem 2, I'm thinking that the solution to problem 1 might solve it too. In a one-week schedule, it seems reasonable to assume that OptaPlanner will realize that language classes should be scheduled at the same time.
Regarding the local optimum settled by the single-week pre-scheduling ("Biology is scheduled on Tuesday mornings"), I imagine I could create a custom move operation that "bundles" these lectures into a single move. I have no idea how simple that is; I would really like to keep the code as simple as possible.
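(For illustration, such a bundled move might look roughly like the sketch below, assuming the OptaPlanner 8 API and hypothetical Lecture/Period/TimeTable classes, where period, a weekly day-plus-hour slot, is the planning variable:)

import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Objects;

import org.optaplanner.core.api.score.director.ScoreDirector;
import org.optaplanner.core.impl.heuristic.move.AbstractMove;

// Moves every lecture of one subject to the same weekly period in a single
// step, so the whole series can escape the local optimum together.
public class SubjectBlockMove extends AbstractMove<TimeTable> {

    private final List<Lecture> lectures; // all lectures of one subject
    private final Period toPeriod;

    public SubjectBlockMove(List<Lecture> lectures, Period toPeriod) {
        this.lectures = lectures;
        this.toPeriod = toPeriod;
    }

    @Override
    public boolean isMoveDoable(ScoreDirector<TimeTable> scoreDirector) {
        // Pointless if every lecture already sits in the target period.
        return lectures.stream().anyMatch(l -> !Objects.equals(l.getPeriod(), toPeriod));
    }

    @Override
    protected AbstractMove<TimeTable> createUndoMove(ScoreDirector<TimeTable> scoreDirector) {
        // Simplification: assumes the bundle is aligned before the move.
        // A production move would record each lecture's previous period.
        return new SubjectBlockMove(lectures, lectures.get(0).getPeriod());
    }

    @Override
    protected void doMoveOnGenuineVariables(ScoreDirector<TimeTable> scoreDirector) {
        for (Lecture lecture : lectures) {
            scoreDirector.beforeVariableChanged(lecture, "period");
            lecture.setPeriod(toPeriod);
            scoreDirector.afterVariableChanged(lecture, "period");
        }
    }

    // Needed for entity/value tabu, which is mentioned above.
    @Override
    public Collection<?> getPlanningEntities() {
        return lectures;
    }

    @Override
    public Collection<?> getPlanningValues() {
        return Collections.singletonList(toPeriod);
    }
}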
Questions
Are my thoughts reasonable? Is there a more clever way to approach these problems? If I have to create custom moves anyway, perhaps I don't need to construct a template week?
Is there a way to assign hints or weights to moves? If so, I could perhaps generate moves with a slightly larger weight that adjust the schedule towards predictable weeks and languages scheduled in the same time slots.
A question well asked!
With regards to your first problem, I suggest you take a look at OptaWeb Employee Rostering and the concept of rotations. A rotation is "how things generally are", and Planner then has the freedom to diverge from the rotation at a penalty. Once you understand the concept of the rotation from the UI, take a look at the planning entity Shift and how the rotation is implemented with the use of the employee and rotationEmployee variables. Note that only employee is an actual @PlanningVariable; rotationEmployee is fixed.
That means you have to define your rotations manually, therefore doing part of the solver's work yourself. However, since this operation is presumably only done once a semester, maybe the solution could be to have a simpler solver generate a reasonable general rotation first, and then a second solver take it and figure out the specific necessary adjustments?
With regards to your second problem, rotations could help there too. But I'm thinking maybe some move filtering and custom moves could help OptaPlanner either move all language classes, or none. Writing efficient custom moves is not easy, and filtering stock moves is cumbersome, so I would only do it when the potential of the other options is exhausted. If you end up doing this, look for MoveIteratorFactory.
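(For instance, a factory that proposes "move all lectures of one subject to one period" moves, reusing the SubjectBlockMove sketched earlier, could look roughly like the sketch below. Generics follow the OptaPlanner 8 API, and TimeTable/Subject/Period plus their accessors are hypothetical:)

import java.util.Iterator;
import java.util.List;
import java.util.Random;

import org.optaplanner.core.api.score.director.ScoreDirector;
import org.optaplanner.core.impl.heuristic.selector.move.factory.MoveIteratorFactory;

// Emits one bundled move per (subject, candidate period) pair, so e.g. all
// French lectures can jump to a new weekly slot together instead of one by one.
public class SubjectBlockMoveIteratorFactory
        implements MoveIteratorFactory<TimeTable, SubjectBlockMove> {

    @Override
    public long getSize(ScoreDirector<TimeTable> scoreDirector) {
        TimeTable timeTable = scoreDirector.getWorkingSolution();
        return (long) timeTable.getSubjects().size() * timeTable.getPeriods().size();
    }

    @Override
    public Iterator<SubjectBlockMove> createOriginalMoveIterator(
            ScoreDirector<TimeTable> scoreDirector) {
        // Only needed for original-order selection; random selection is typical.
        throw new UnsupportedOperationException();
    }

    @Override
    public Iterator<SubjectBlockMove> createRandomMoveIterator(
            ScoreDirector<TimeTable> scoreDirector, Random workingRandom) {
        TimeTable timeTable = scoreDirector.getWorkingSolution();
        return new Iterator<SubjectBlockMove>() {
            @Override
            public boolean hasNext() {
                return true; // random selection draws endlessly
            }

            @Override
            public SubjectBlockMove next() {
                Subject subject = pickRandom(timeTable.getSubjects(), workingRandom);
                Period period = pickRandom(timeTable.getPeriods(), workingRandom);
                return new SubjectBlockMove(timeTable.getLecturesOf(subject), period);
            }
        };
    }

    private static <T> T pickRandom(List<T> list, Random random) {
        return list.get(random.nextInt(list.size()));
    }
}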
My answer is a little vague, as we did not get into the specifics of the domain model, but for the purposes of designing the overall solution, it hopefully gives enough clues.

Reassign user story during sprint?

If a story is in progress, and the next swim lanes are code review and QA-ready, how should assignment of stories work? Should the story remain assigned to the developer, with the code review and QA tasks created as sub-tasks within it? Or should the story be re-assigned as it moves: the developer moves it to code review, and when the review is done, the reviewer moves it to the QA lane and re-assigns it to QA? Re-assigning tickets as they move from in-progress to later states seems like an anti-pattern. Re-assigning a ticket before it is brought into the sprint looks fine, but not after.
Scrum does not have anything to say about how the work is done or how a board is managed. However, many teams look at Kanban's "pull" approach to answer this. In that case, work is never assigned or handed over; it is only claimed. Therefore, work would be moved to "Code Review" by the reviewer when they begin the work. Similarly, the work would be moved to QA by the tester when they start. "Ready" columns are a bit of a misnomer, as they are not states; rather, they are statuses of the previous state. If your order is Code Review - QA Ready - QA, then QA Ready is in fact a possible designation for work still in Code Review. This may seem minor, but it is very important for preventing pile-ups in your process where work stalls without owners.
There is no single answer, but one way of doing it is to think of a user story as a container of tasks, where each task is a small technical deliverable of any kind. With this mindset you can effectively stop thinking about who the assignee is, as each developer makes their own small contribution towards the goal.
One of the problems with task re-assignment is that at some point you can lose traceability of who has done what, and of productivity on a per-developer basis. So in this sense, having each team member doing their own tasks and delivering towards the completion of a user story can solve this.
Then you can assign the user story to the product owner, or to a developer who holds ownership of its delivery to test, where the tester takes over. Assigning the user story to a developer does not mean they own it; it just means it is their responsibility to ensure the hand-over to test, nothing more, nothing less.
When a tester encounters a bug, they create a bug ticket attached to the user story.
Not recommended, though it is feasible. You have to assess your current work situation. If the user story is something that can make a whole difference, it would be better to stop the sprint, reassess the situation, make the necessary changes, and then continue. Either way, when you add a new user story to the backlog mid-sprint, deadlines can hardly be met.
We are using a slightly different approach. We have the following columns on our Jira board:
To-do
In_progress
Ready for Review
Ready for QA
In-Testing
Rework/Rejected
Done
A developer picks a task from To-do, assigns it to himself, and moves it to In_progress. Once he is done, he moves it to Ready for Review and leaves it unassigned. Someone will pick it up, assign it to himself, and review it. After reviewing, that person moves the case to Ready for QA without assigning it to anyone. Whoever is free or planning to work on the case assigns it to himself, and when he starts working on it, he moves it to In-Testing. As a result of testing, the case can go to Rework/Rejected or to Done. If it moves to Rework/Rejected, he assigns it to the person who originally worked on it, and that person, when reworking it, moves the case to In_progress again.

JIRA Process for Splitting User Stories into Smaller User Stories [closed]

In Scrum, there is the process of backlog grooming, which, in part, deals with breaking down epics/big user stories into smaller stories that are easier to estimate and to complete within an individual sprint.
In JIRA Agile, I am looking for the proper way to mirror the grooming process while still keeping a manageable list of backlog items and accurate estimation tracking of the stories.
Here's an example of the problem I see:
An epic is created as an individual ticket. (Ticket total: 1)
That epic is broken down into (let's say) 3 user stories. (Ticket total: 4)
We try to estimate the first user story and realize it's too big, so we break this user story down into 2 smaller user stories. (Ticket total: 6)
I now have a backlog of 6 tickets to prioritize and manage, when in reality I should only need to prioritize the smallest user stories against each other. Also, I may have put a large estimate on a large user story and then refined it through estimating the sub-stories. (I.e. in step 3 the large user story might be estimated at 20 points, but the sub-stories might total 13 + 5 = 18.)
Am I splitting stories the way it was intended to be done in JIRA?
Should I remove the estimates from the larger user stories once I've subdivided them, and only focus on the estimates for the smallest stories to prevent double-counting?
How can I manage user stories that have ultimately become epics in themselves (and still associate them with the higher-level epic)?
(I've been using the Structure plugin, but it doesn't help me with managing backlog prioritization in the agile board.)
The concept of an epic is that it is a very large user story that requires breaking down.
In Scrum, the backlog goes through just-in-time refinement. We start with coarse-grained ideas, and as we get closer to starting work on something, we get it into better and better shape.
It would be quite possible to do all of this with stories. However, many teams find it useful to have backlog items that are less defined than a story, hence the use of epics.
As an example, a team I worked with a few years ago was asked to work on a medical evidence product. They realised that the required work fell into 4 broad areas (i.e. really big stories). There was a clear prioritisation which meant they needed to work on one of these areas straight away and the others could follow on later.
The team decided to break the first area of work down into normal-sized user stories immediately. They added these items to the backlog.
They also wanted to track the other 3 areas of work and so they added epics to represent them.
The team started work on the first area and made good progress. It was clear that they would be able to start on the next epic in a few sprints' time. The Product Owner started to refine the epic by breaking it down a little. They met with the team several times, and some stories were further broken down into smaller ones. By the time the team was about 1-2 weeks away from starting on the epic, it was in good shape and consisted of stories that could be brought into planning meetings.
At this point the epic was no longer of any real interest to the team; they had the detailed stories they needed. The team used JIRA, and they did find it helpful to see the epic name on issues, as it helped them see the backlog clearly.
My suggestion to you would be:
Add epics to JIRA as placeholders for future work
As you get closer to working on them, break them down into stories
Once an epic is broken down into stories, you no longer need to be concerned about it
If a story is broken down into smaller stories, remove the original
You may find that you break epics down into stories but still need to keep the epic because there is more work to be done on it. If this happens, it suggests that your epics are too big. I would recommend that an epic be broken down into no more than 10 stories (even that is pretty big).
Try to avoid epics that are really categories of work and hence can keep on producing new stories. You might want to use themes or labels in JIRA for this kind of thing.
An epic should not be listed as a story; rather, an epic should act as the parent of its stories. Each story has only one parent, which is an epic, not another story.

ASP.NET web site developer pricing

I have been approached to build some websites for a few small businesses. They want a basic, out-of-the-box, database-driven website with some standard stuff (users, authentication, a few dynamic pages, etc.). I am going to use ASP.NET MVC for this.
They have asked me how much I charge for this. My problem is that I have no frame of reference here. Should I charge a flat fee for the project or an hourly rate? Where do I start in determining correct pricing for a website project?
Charge an hourly fee that is about 3x the hourly rate you would command in a full-time job. The 3x multiplier basically evens things out for the benefits, etc. that you won't get as a 1099 contractor.
Whatever you do, no matter how "standard" it sounds, do not charge a flat fee. Under that arrangement they have no incentive to curb feature creep. Even if you agree on a really tight spec up front, it is a recipe for disaster because it forces you to renegotiate every time they want something more. Under an hourly arrangement, feature creep works to your advantage.
Also, don't discount your hourly rate if you are a novice; just don't bill unproductive hours. It is much easier to ease into billing more hours later than to renegotiate the price per hour.
Charge per hour.
-- edit
So attempt to 'quote' it by estimating the number of hours. Make sure your estimate is conservative.
A nice approach is to consider, in your head, the 'min', 'max', and 'standard' amounts of time, then use those to estimate the real time it will take you; one way to combine them is sketched below.
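(A common way to combine those three numbers is a PERT-style three-point estimate, which weights the most likely case; this is a generic technique, not something the answer above prescribes. A minimal sketch:)

public class Estimate {
    // PERT-style three-point estimate: weights the 'standard' (most likely) case 4x.
    static double threePointEstimate(double min, double standard, double max) {
        return (min + 4 * standard + max) / 6;
    }

    public static void main(String[] args) {
        // Example: 20h best case, 40h typical, 90h worst case -> 45h estimate.
        System.out.println(threePointEstimate(20, 40, 90));
    }
}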
If you know that they know what they want and won't change the specs on you, go for a lump sum. That way you can work quickly.
If they are prone to change their minds and don't know what they want, go for an hourly fee. That way you won't be stuck working on their project for months without additional pay when they can't decide on exactly what they want.
I admit that I don't know much about this issue. However, I would still like to warn about the whole charge-per-hour mentality. While this approach protects the developer, it doesn't sit well with the business owner:
Charge-per-hour, to the business owner, is a liability, whereas a fixed price is just a cost. That's one.
The second thing is, if you are charging per hour, how are you going to justify your "research time"? Are you going to charge for that as well? Business owners don't like to pay for research time. Or you can stick to your old tricks, build something that has been reinvented N times, and charge for the time you spend, but that would seem unethical to some.
I have billed both by the hour and by the project. It's been my experience that customers are happier with project based billing instead of hourly billing.
With that in mind, I always pad the project cost by an amount I feel will cover the times when the client decides to change their mind. Further, I keep the project plan pretty simple. For example, I don't write 4 pages on how the login screen will work. Instead, it's a single bullet point: "Login Page". This allows both them and me a little flexibility.
Because I keep things simple AND I allow time for flexibility AND the clients know how much it's going to cost up front, my clients are happier and I can keep better track of my income. Also, I keep in pretty close contact with them. As long as you can keep the relationship good, you'll have a long-term client.
Of course, it takes a bit of self discipline in combination with experience to know how long things take to build. Along these lines I never experiment on a client's dime. When I write the proposal, I already know what I'm going to use to get the job done and I've used those tools before. Because of this I can say with confidence that a login page will take a certain amount of time to put together.
Next, don't bite off more than you can chew. If it's a big project, break it up into smaller deliverables with their own pay schedule. That way the client (or you) can decide to walk away at any point. For example, if you think the project will take 3 months, break it up into 3 pieces. Incidentally, this helps with cash flow.
Finally, don't discount your time when getting started. That scares people.
I have a flat rate I charge for sites, and I outline exactly what they will get; anything beyond that outline gets charged at an hourly rate. The hard part is that if you're getting into a project where you're not sure how long it will take, you might want to break down the various pieces and then add at least 10 hours to that estimate. You don't want to sell yourself short, but you also don't want to overcharge the customer. Be clear up front that once the site is delivered, all changes are billed per hour or through a maintenance fee structure.
Good luck.

How do you estimate a ROI for clearing technical debt?

I'm currently working with a fairly old product that's been saddled with a lot of technical debt from poor programmers and poor development practices in the past. We are starting to get better and the creation of technical debt has slowed considerably.
I've identified the areas of the application that are in bad shape and I can estimate the cost of fixing those areas, but I'm having a hard time estimating the return on investment (ROI).
The code will be easier to maintain and will be easier to extend in the future but how can I go about putting a dollar figure on these?
A good place to start looks like going back into our bug tracking system and estimating costs based on bugs and features relating to these "bad" areas. But that seems time-consuming and may not be the best predictor of value.
Has anyone performed such an analysis in the past and have any advice for me?
Managers care about making $ through growth (first and foremost, e.g. new features that attract new customers) and, second, through optimizing the process lifecycle.
Your proposal falls into the second category, so it will undoubtedly fall behind goal #1 (and thus get prioritized down even if it could save money... because saving money implies spending money, most of the time at least ;-)).
Now, putting a $ figure on the "bad technical debt" could be turned around into a more positive spin (assuming the following applies in your case): "if we invest in reworking component X, we could introduce feature Y faster and thus gain Z more customers".
In other words, evaluate the cost of technical debt against cost of lost business opportunities.
Sonar has a great plugin (the Technical Debt plugin) that analyzes your source code for just such a metric. While you may not be able to use it in your build, as it is a Maven tool, it should provide some good metrics.
Here is a snippet of their algorithm:
Debt (in man days) =
    cost_to_fix_duplications
  + cost_to_fix_violations
  + cost_to_comment_public_API
  + cost_to_fix_uncovered_complexity
  + cost_to_bring_complexity_below_threshold

where:
  Duplications = cost_to_fix_one_block * duplicated_blocks
  Violations   = cost_to_fix_one_violation * mandatory_violations
  Comments     = cost_to_comment_one_API * public_undocumented_api
  Coverage     = cost_to_cover_one_of_complexity * uncovered_complexity_by_tests
                 (80% coverage is the objective)
  Complexity   = cost_to_split_a_method * (function_complexity_distribution >= 8)
               + cost_to_split_a_class * (class_complexity_distribution >= 60)
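(A direct transcription of that formula into code, with purely hypothetical input numbers for illustration:)

public class TechnicalDebtEstimate {

    // All costs in man-days; each count is taken from static analysis.
    static double debtInManDays(
            double costToFixOneBlock, int duplicatedBlocks,
            double costToFixOneViolation, int mandatoryViolations,
            double costToCommentOneApi, int publicUndocumentedApi,
            double costToCoverOneComplexity, int uncoveredComplexityByTests,
            double costToSplitAMethod, int methodsOverComplexity8,
            double costToSplitAClass, int classesOverComplexity60) {
        double duplications = costToFixOneBlock * duplicatedBlocks;
        double violations = costToFixOneViolation * mandatoryViolations;
        double comments = costToCommentOneApi * publicUndocumentedApi;
        double coverage = costToCoverOneComplexity * uncoveredComplexityByTests;
        double complexity = costToSplitAMethod * methodsOverComplexity8
                + costToSplitAClass * classesOverComplexity60;
        return duplications + violations + comments + coverage + complexity;
    }

    public static void main(String[] args) {
        // Hypothetical numbers for a mid-sized legacy codebase -> 215 man-days.
        System.out.println(debtInManDays(
                0.5, 120,   // duplicated blocks
                0.1, 800,   // mandatory violations
                0.05, 300,  // undocumented public API
                0.2, 150,   // uncovered complexity (80% coverage objective)
                0.5, 40,    // methods with complexity >= 8
                2.0, 5));   // classes with complexity >= 60
    }
}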
I think you're on the right track.
I've not had to calculate this but I've had a few discussions with a friend who manages a large software development organisation with a lot of legacy code.
One of the things we've discussed is generating some rough effort metrics from analysing VCS commits and using them to divide up a rough estimate of programmer hours. This was inspired by Joel Spolsky's Evidence-based Scheduling.
Doing such data mining would also allow you to identify clusters of when code is being maintained and compare that to bug completion in the tracking system (unless you are already blessed with tight integration between the two and accurate records).
Proper ROI needs to calculate the full Return, so some things to consider are:
- decreased cost of maintenance (obviously)
- opportunity cost to the business of downtime or missed new features that couldn't be added in time for a release
- ability to generate new product lines due to refactorings
Remember, once you have a rule for deriving data, you can have arguments about exactly how to calculate things, but at least you have some figures to seed discussion!
I can only speak to how to do this empirically in an iterative and incremental process.
You need to gather metrics to estimate your demonstrated best cost per story point. Presumably, this represents your system just after the initial architectural churn, when most of the design trial-and-error has been done but entropy has had the least time to cause decay. Find the point in the project's history when velocity/team-size was highest. Use this as your cost-per-point baseline (zero debt).
Over time, as technical debt accumulates, velocity/team-size begins to decrease. The percentage decrease of this number with respect to your baseline can be translated into "interest" being paid on each new story point. (This is really interest paid on technical and knowledge debt.)
Disciplined refactoring and annealing cause the interest on technical debt to stabilize at some value higher than your baseline. Think of this as the steady-state interest the product owner pays on the technical debt in the system. (The same concept applies to knowledge debt.)
Some systems reach the point where the cost + interest on each new story point exceeds the value of the feature point being developed. This is when the system is bankrupt, and it's time to rewrite the system from scratch.
I think it's possible to use regression analysis to tease apart technical debt and knowledge debt (but I haven't tried it). For example, if you assume that technical debt correlates closely with some code metrics, e.g. code duplication, you could determine the degree the interest being paid is increasing because of technical debt versus knowledge debt.
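(A minimal sketch of the baseline-and-interest arithmetic described above, with hypothetical team numbers:)

public class DebtInterest {

    // Cost per story point = person-days spent per sprint / velocity.
    static double costPerPoint(int teamSize, double sprintDays, double velocity) {
        return teamSize * sprintDays / velocity;
    }

    public static void main(String[] args) {
        double baseline = costPerPoint(5, 10, 50); // best observed: 1.0 person-day/point
        double current = costPerPoint(5, 10, 40);  // today: 1.25 person-days/point
        // 25% "interest" now paid on every new story point
        double interest = (current - baseline) / baseline;
        System.out.printf("Technical-debt interest: %.0f%%%n", interest * 100);
    }
}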
+1 for jldupont's focus on lost business opportunities.
I suggest thinking about those opportunities as perceived by management. What do they think affects revenue growth -- new features, time to market, product quality? Relating debt paydown to those drivers will help management understand the gains.
Focusing on management perceptions will help you avoid false precision. ROI is an estimate, and it is no better than the assumptions made in its estimation. Management will be suspicious of purely quantitative arguments because they know there's something qualitative in there somewhere. For example, over the short term the real cost of your debt paydown is the other work the programmers aren't doing, rather than the cash cost of those programmers, because I doubt you're going to hire and train new staff just for this. Are the improvements in future development time or quality more important than the features those programmers would otherwise be adding?
Also, make sure you understand the horizon for which the product is managed. If management isn't thinking about two years from now, they won't care about benefits that won't appear for 18 months.
Finally, reflect on the fact that management perceptions have allowed this product to get to this state in the first place. What has changed that would make the company more attentive to technical debt? If the difference is you -- you're a better manager than your predecessors -- bear in mind that your management team isn't used to thinking about this stuff. You have to find their appetite for it, and focus on those items that will deliver results they care about. If you do that, you'll gain credibility, which you can use to get them thinking about further changes. But appreciation of the gains might be a while in growing.
Being a mostly lone or small-team developer, this is out of my field, but to me a great way to find out where time is wasted is very, very detailed timekeeping, for example with a handy task-bar tool like this one that can even filter out when you go to the loo, and can export everything to XML.
It may be cumbersome at first, and a challenge to introduce to a team, but if your team can log every fifteen minutes they spend due to a bug, mistake or misconception in the software, you accumulate an impressive basis of real-life data on what technical debt is actually costing in wages every month.
The tool I linked to is my favourite because it is dead simple (it doesn't even require a database) and provides access to every project/item through a task-bar icon. Additional information on the work carried out can be entered there too, and timekeeping is literally activated in seconds. (I am not affiliated with the vendor.)
It might be easier to estimate the amount it has cost you in the past. Once you've done that, you should be able to come up with an estimate for the future with ranges and logic even your bosses can understand.
That being said, I don't have a lot of experience with this kind of thing, simply because I've never yet seen a manager willing to go this far in fixing up code. It has always just been something we fix up when we have to modify bad code, so refactoring is effectively a hidden cost on all modifications and bug fixes.
