Dynamics of burn down charts in Scrum [closed]

I have a question about how dynamic the Y axis of a burn down chart in Scrum should be. We plot the chart at the beginning of a sprint, with the total number of estimated story points on the Y axis and the planned days on the X axis.
Usually, during the sprint, we have a fair number of:
unplanned tasks / stories;
tasks / stories that take longer than estimated (re-estimated by the person checking out the task);
Questions:
Should the story points of the unplanned tasks be plotted into the chart? If so, should we extend the Y axis as well and redraw the expected curve? Or just plot the points and accept an actual curve that may rise above the starting point?
Should the re-estimations be counted when plotting the chart, or just the initial estimates? The same sub-questions apply as above...
I would prefer to ignore the unplanned items and the re-estimations, as they will show up in the actual focus-factor calculation anyway. Is that wrong?

Try using a burn UP chart.
http://www.nearinfinity.com/blogs/lee_richardson/forget_burndown_use_burnup_charts.html
Also, do everything in your power to stop the unplanned items. They are typically very caustic. If it's code debt cashing in, try to address it a little bit at a time in every sprint. If it's a consistent amount of time every sprint, perhaps create a story at the start of the sprint for "unplanned tasks" or "production fixes" or something like that.
In the end, what really matters is that the burndown chart allows you to track progress (or lack thereof) toward the commitment. So as long as you're achieving that, you're good to go. Which means, really any of these solutions would work - just pick one and go with it.

We usually do option number 2 at work, adding the new story points to the actual line so that we "see" the line go up, reflecting new learnings and additions. But since opinions vary, I guess your team will have to agree on what suits them best, since these burndown charts are for the team to show progress throughout the sprint.

What you count or not count should depend on what you are using your burndown for.
When I use a burndown it is most often to answer the question "Are we on track to completing our commitment of this sprint - or do we need to take external action?".
In that case, the thing that is most relevant to track is the anticipated total amount of work left to finish the commitment; whether that work was planned or unplanned, or was originally estimated at some other amount, is uninteresting in this context. It is still work that needs to be done, so it all counts.
So, count all remaining work. If the graph points towards the goal, keep working. If it points somewhere drastically different, take external action (e.g. renegotiate the sprint commitment with the PO).
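A minimal sketch of that counting rule (my illustration, not part of the original answer; the Item type and field names are invented): each day, sum the current remaining estimate of every open item, whether it was planned or added mid-sprint, and plot that total.

#include <vector>

// One sprint backlog item with its current (possibly re-estimated) remaining work.
struct Item { float remainingPoints; bool done; };

// Today's burndown value: all work still needed to finish the commitment.
// Unplanned items and re-estimations are simply included - it all counts.
float RemainingWork(const std::vector<Item>& sprintBacklog)
{
    float total = 0.f;
    for (const Item& item : sprintBacklog)
        if (!item.done)
            total += item.remainingPoints;
    return total;
}

Plotted daily, this line can rise above the sprint's starting total when scope is added, which is exactly the signal you want to see.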
Now, you might be trying to answer another question (e.g. "how good are we at planning?" or "are we having scope creep during the sprint?"), and in that case you would count in a different way.

A burndown chart is useful for tracking progress towards the team's commitment. In this case, it sounds like your team is struggling with two things that don't relate to the burndown chart:
1. Unplanned work
2. Poor estimates.
The key here is to focus on those problems. No matter what you do with the burndown chart, if you're adding unplanned work and your estimates are poor... you'll never derive any value from the burndown chart.
I'd recommend a couple of things:
1. Switch to tracking hours for Tasks... not points. Hours are tangible for the team... they mean something. Points are typically burned down at the release level.
2. Try shortening the length of your sprints. It's easier to achieve a smaller goal.
3. Ensure that task estimates are no longer than 8 hours. In fact, I'd shorten that to probably 4 hours. Estimating tasks that take longer than a single day encourages the wrong behavior for the team.
4. Ensure that you're spending enough time in Sprint Planning that the team can make a commitment. An effective sprint planning meeting is the first step towards an effective sprint.

Related

Flex Profiling (Flex Builder): comparing two results

I am trying to use Flex Profiler to improve the application's performance (loading time, etc.). I have seen the profiler results for the current design. I want to compare these results with a new design for the same set of data. Is there some direct way to do this? I don't know of any way to save the current profiling results in history and compare them later with the results of a new design.
Otherwise I have to do it manually: write the two sets of results down in a notepad and then compare them.
Thanks in advance.
Your stated goal is to improve aspects of the application performance (loading time, etc.) I have similar issues in other languages (C#, C++, C, etc.) I suggest that you focus not so much on the timing measurements that the Flex profiler gives you, but rather use it to extract a small number of samples of the call stack while it is being slow. Don't deal in summaries, but rather examine those stack samples closely. This may bend your mind a little bit, because it will not give you particularly precise time measurements. What it will tell you is which lines of code you need to focus on to get your speedup, and it will give you a very rough idea of how much speedup you can expect. To get the exact amount of speedup, you can time it afterward. (I just use a stopwatch. If I'm getting the load time down from 2 minutes to 10 seconds, timing it is not a high-tech problem.)
(If you are wondering how/why this works: it works because the reason the program is slower than it could be is that it is requesting work, mostly via method calls, that you are going to avoid executing so much. For whatever fraction of time is spent in those method calls, they are sitting exposed on the stack, where you can easily see them. For example, if there is a line of code that is costing you 60% of the time, and you take 5 stack samples, it will appear on 3 samples, plus or minus 1, roughly, regardless of whether it is executed once or a million times. So any such line that shows up on multiple stack samples is a possible target for optimization, and targets for optimization will appear on multiple stack samples if you take enough.
The hard part about this is learning not to be distracted by all the profiling results that are irrelevant. Milliseconds, average or total, for methods, are irrelevant. Invocation counts are irrelevant. "Self time" is irrelevant. The call graph is irrelevant. Some packages worry about recursion - it's irrelevant. CPU-bound vs. I/O bound - irrelevant. What is relevant is the fraction of stack samples that individual lines of code appear on.)
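To make the "plus or minus 1" concrete (my arithmetic, just spelling out the claim above): each of the 5 samples independently lands on that line with probability 0.6, so the number of hits is binomially distributed with mean 5 × 0.6 = 3 and standard deviation sqrt(5 × 0.6 × 0.4) ≈ 1.1, i.e. roughly 3 ± 1.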
ADDED: If you do this, you'll notice a "magnification effect". Suppose you have two independent performance problems, A and B, where A costs 50% and B costs 25%. If you fix A, total time drops by 50%, so now B takes 50% of the remaining time and is easier to find. On the other hand, if you happen to fix B first, time drops by 25%, so A is magnified to 67%. Any problem you fix makes the others appear bigger, so you can keep going until you just can't squeeze it any more.

Scrum - When do you Estimate the Effort for Product Backlog Items? [closed]

At which part of the Scrum process does your team make educated estimates as to the effort required to complete a given product backlog item?
For example, say you have a product backlog item that says "The user will be able to supply their email address and receive an email with a link for resetting their password" scheduled for Sprint 1.
Do you start the sprint with a very rough estimate and refine it? When does this "user story" turn into granular action items that a programmer can estimate in time? (ex: "Build web form with one text box and submit button" = 2 hours)
Do you do the finer, more accurate, estimates before the sprint begins? At the beginning of the sprint? Or during the sprint whenever the designer/programmer finally bumps into the task?
Usually, estimation should be done at 2 levels at the start of each sprint: story level and task level. For best results, the product owner and team should do both together, every time, although sometimes it is acceptable for the team to estimate at task level without the product owner present.
Project Estimation / Roadmap Construction (Story Level)
On your first sprint, you have to estimate at least 80% of the backlog items (I'm assuming the Product Owner already has it prioritized) to build a reasonable project roadmap, which will consist of stories grouped into sprints and an initial projection of the project length.
At this point, each story is estimated not in hours, days, or weeks, but in a relative unit of size (which encompasses effort, complexity, and risk all at once), such as story points. We use the Fibonacci scale and Planning Poker for this phase. It is important that the whole team actively participate in this process.
After that, the team has to guess how many stories they are able to complete in the 1st sprint, which becomes their initial estimated velocity (points/iteration). Usually, it is best not to use 1-month sprints but rather a 2-week or 1-week sprint length, to improve estimation accuracy. The 1st planning will usually take a whole day or even 2 days, depending on backlog size, team size, and the length of the sprints.
After this 1st round of story estimation, the product owner together with the team might want to re-prioritize the backlog to optimize the cost/benefit ratio, so there might be some back and forth until there is an agreement.
You should end up with something like this:
PROJECT ACME ROADMAP
SPRINT 1 (38 points) <= estimated velocity
--------
Story 1 (21 points)
Story 2 (13 points)
Story 3 (4 points)
SPRINT 2 (40 points)
--------
Story 4 (13 points)
Story 5 (13 points)
Story 6 (8 points)
Story 7 (5 points)
SPRINT 3 (39 points)
--------
...
In the following sprints, this roadmap will be revised over and over at the start of each sprint, adjusting the projection to the actual velocity the team is achieving and re-calculating the project length as needed. Sometimes re-estimating the stories is necessary as well, as the team evolves and requirements change. However, revising the roadmap should take no more than half a day.
Progress at this level should be made visible to stakeholders using a burndown chart, where the X axis is sprints and the Y axis is story points.
Sprint Estimation (Task Level)
The 2nd part of the planning phase for each sprint is spent breaking down each story into tasks. Here, tasks should be highly technical in nature and estimated in hours. We have a policy that if a task is estimated at longer than 8 hours, it needs to be broken down into more detailed tasks, no matter what. The result will be the sprint backlog, with tasks grouped by story, and the sprint burndown chart, where the X and Y axes are days of the sprint and hours, respectively.
It should look like this:
Sprint 8
--------
Story 17
Task 1: 8 hours
Task 2: 6 hours
Task 3: 2 hours
Story 18
Task 1: 8 hours
Task 2: 6 hours
Story 19
Task 1: 6 hours
Task 2: 3 hours
...
So basically, these are the 2 types of estimation you should be doing at the start of each sprint, where usually the 1st sprint requires a little more effort to build the initial project roadmap.
The rough estimate should be done before the sprint planning in which the item is expected to be picked up by the team. Usually we check the product backlog during context switches or downtime during a sprint to put rough estimates, in "story points", on new items, so the product owner can prioritize them properly before the next sprint planning.
Our sprint planning is usually time-boxed to 2 hours at the beginning of a new sprint. This is when we meet with the product owner(s) and pick items from the backlog, most of them roughly estimated and correctly prioritized. Missing estimates are done on the spot, and then we do the "fine-grained" tasking of the stories within this time window (which is usually quite intense work), leveraging the fact that the rest of the company is aware of this and POs and stakeholders are available to sort out unaccounted-for details.
Of course, sometimes the actual task sequence during implementation will differ from the planned tasking; then it has to be adjusted, and the burndown chart might need to be retuned.
Burndown in tasks
We simply use the number of tasks as our burndown measure. Usually you would use something like actual hours or ideal hours, but this was good enough for us, and apparently interesting enough to need some clarification.
We do not estimate time on these tasks; all that matters is the story point estimate (the rough estimate) on the story, which we put in ideal man-days.
How a story is split into tasks is, for us, more about distributing work across the team and indicating general progress than about making accurate hour estimates.
At the end of the sprint we have completed x story points, and we derive our focus factor from that in relation to the man-days actually available to the team that sprint.
In the end, the rough estimate in story points is what we base our story selection on (i.e., how many SP we can do in a sprint). We tend to get better at the rough estimate, and I think this is important, because the product owner prioritizes items in the backlog mostly based on it, and never based on task estimates anyway, since those are team-internal.
As the tasks carry no explicit time estimate themselves, the focus is on the rough estimate in story points. If the tasks together take more time than estimated story points * focus factor, we simply got the rough estimate wrong, or should have corrected it during sprint planning, when most of the information should have been available or sorted out.
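To make the focus-factor arithmetic concrete (illustrative numbers of mine, not from this answer): if the stories completed in a sprint add up to 24 story points, with points expressed in ideal man-days as above, and 5 people were each available for 8 days, the focus factor is 24 / (5 × 8) = 0.6. The next sprint's story selection would then be sized at roughly available man-days × 0.6.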
At the moment we produce a detailed task breakdown and estimates before the sprint starts. This is for 2 reasons:
1) Our business wants the estimates to help them decide priority.
2) The development teams are under pressure to deliver to the estimates and not deviate from the natural burn down.
In my opinion this is the wrong approach as it removes the ability to be agile. I think the process should be more like this...
1) The business should use the Fibonacci numbers produced during the planning meeting to help them determine priority, or at least expect only a 'finger in the air' estimate from the dev team.
2) The burndown chart should be seen as a guide to how the project is progressing, indicating whether more PBIs need to be added or lower-priority ones dropped, and not as a firm 'target' of what will be completed.
Working this way would allow us to spend much less time in planning and design. We would still produce a high level estimate at the start of the sprint which could be refined as the sprint goes on.
I will be interested to get comments on this before I have the battle with my business.

Scrum: Unfinished products and sprint velocity [closed]

Let's say product X is worth 10 story points. Development starts in sprint Y, but is not completed in time. What do you do with the story points when calculating sprint Y's velocity?
Would you:
a. Allocate 0 story points for sprint Y and 10 points for the sprint it is eventually completed in;
b. Determine the story points for the remaining work (let’s say 3) and allocate the difference to sprint Y (7 in our example); or
c. Something else?
Thanks in advance!
Depends on whether you care about your "instantaneous" or "average" velocity. Personally, I wouldn't make it more complicated than necessary and just add it into the sprint where it was completed. Calculate your average velocity by looking at the average number of points completed per sprint over the last 3, 6, and 12 months. Hopefully, these will eventually converge and you'll have a good idea of how much you can get done in one sprint.
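A small sketch of that rolling average (my code, not the answerer's; the names are invented):

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Average velocity over the last `window` sprints. Feed it the points
// completed per sprint; compare windows of, say, 6, 12, and 24 sprints
// (roughly 3, 6, and 12 months of 2-week sprints) to see if they converge.
double AverageVelocity(const std::vector<int>& pointsPerSprint, std::size_t window)
{
    std::size_t n = std::min(window, pointsPerSprint.size());
    if (n == 0) return 0.0;
    return std::accumulate(pointsPerSprint.end() - n, pointsPerSprint.end(), 0.0) / n;
}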
Allocate 0 points for sprint Y and 10 points when the story is eventually completed. Either the story is done or it is not done. There is no middle ground. You want to avoid "50% done", or your teams may implement many stories halfway and none completely.
It is perfectly okay not to finish a story during a sprint and to complete it in the next sprint. But you should not present this story to the product owner during the sprint review.
If you have enough stories in a given sprint, it won't matter whether a story is completed this sprint or the next. Things will average out.
It is also important to explain to the team and to the stakeholders that the velocity helps estimate when the release will take place and is not a measure of the team performance.
The team should be judged on the final result they produce, not when those results are produced.
Combined with a well-prioritized backlog, this will help you create good-quality software that meets your customers' needs.
That's one of the ideas of the sprint: "completeness" is binary, either done or not. Over time the team(s) will get better at estimation, and this question will lose relevance
BUT...
The next question is how you calculate your commitment for the sprint after Y. Say your past weather shows an average velocity of 20 points, and you carry over a story originally estimated at 10 points but with only about 3 points of work left. Do you:
A) Take on another 17 points to fill your estimated capacity of 20 points, or
B) Only take on 10 more points, since the carried-over story was originally estimated at 10 points?
We got into a mess trying to do A. What do other people think?
[Update]
I posted a question about this:
Work out sprint capacity when carrying over story points in scrum
The situation here is not satisfactory, but at the moment we estimate the work remaining for unfinished stories. If it is only around 20% or less, we leave the story and the points in the sprint they are in. If it is more than that, we ask the PO whether we should finish the story; if yes, we move it to the new sprint.
However, this is not satisfactory for several reasons.
First, big or risky stories should have been started at the beginning of the sprint, so that non-completion could have been avoided.
Second, we get inaccurate (but probably smoother) velocity estimates, which are less useful going forward.
Third, it isn't strict, and a team is like a 2-year-old child: show it a slight weakness and it will want to exploit it.
Finally, strictness is being tightened as time progresses; the teams are finding their feet to an extent and learning the best ways of dealing with things. We already have massive variation in velocity - most teams have a comment on each and every sprint about what factors (holiday, illness, etc.) affected it... totally bad :(

How many game updates per second? [closed]

What update rate should I run my fixed-rate game logic at?
I've used 60 updates per second in the past, but that's awkward because the tick length isn't a whole number of milliseconds (16.666666). My current game uses 100, but that seems like overkill for most things.
None of the above. For the smoothest gameplay possible, your game should be time-based, not frame-locked. Frame-locking works for simple games where you can tweak the logic and lock down the framerate. It doesn't do so well with modern 3D titles where the framerate jumps all over the place and the screen may not be VSynced.
All you need to do is figure out how fast an object should be going (i.e. virtual units per second), compute the amount of time since the last frame, scale the number of virtual units to match the amount of time that has passed, then add those values to your object's position. Voila! Time-based movement.
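In code, that amounts to one line per update. A minimal sketch (my own function and parameter names, assuming time is measured in seconds and position is a float):

// Time-based movement: scale speed by the real time elapsed since the last frame.
void UpdatePosition(float& position, float unitsPerSecond, float secondsSinceLastFrame)
{
    position += unitsPerSecond * secondsSinceLastFrame;
}

An object set to 3 units/sec then moves 3 units every second, whether that second contained 30 frames or 300.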
I used to maintain a Quake3 mod, and this was a constant source of user questions.
Q3 uses 20 'ticks per second' by default - the graphics subsystem interpolates, so you get smooth motion on the screen. I initially thought this was way too low, but it turns out to be fine, and there really aren't many games at all with faster action than Q3.
I'd personally go with "good enough for John Carmack, good enough for me".
I like 50 for fixed-rate PC games. I can't really tell the difference between 50 and 60 (and if you are making a game where that difference matters, you should probably be at 100).
You'll notice the question is about 'fixed-rate game logic' and not the 'draw loop'. For clarity, the code will look something like:
while(1)
{
    // Catch up: run one fixed update for every TICK_LENGTH of real time
    // that has elapsed, then draw a frame.
    while(CurrentTime() >= lastUpdate + TICK_LENGTH)
    {
        UpdateGame();
        lastUpdate += TICK_LENGTH;
    }
    Draw();
}
The question is what should TICK_LENGTH be?
Bear in mind that, unless your code is measured down to the cycle, each pass through the game loop will not take the same number of milliseconds to complete - so 16.6666 not being a whole number is not really an issue, as you will need to measure time and compensate anyway. Besides, it's not 16.6666 updates per second; it's the average number of milliseconds per update your game loop should be targeting.
Such variables are generally best found via the guess and check strategy.
Implement your game logic in such a way that it is refresh-agnostic (say, by exposing the ms/update as a variable and using it in any calculations), then play around with the refresh rate until it works, and then keep it there.
As a short-term solution: if you want a whole-millisecond tick but don't care about the exact updates per second, 15 ms is reasonably close to 60 updates/sec (it gives about 67); if you care about both, 20 ms, i.e. 50 updates/sec, is probably the closest you are going to get.
In either case, I would simply treat time as a double (or a long with high resolution) and provide the rate to your game as a variable, rather than hard-coding it.
The ideal is to run at the same refresh-rate as the monitor. That way your visuals and the game updates don't go in and out of phase with each other. The fact that each frame doesn't last an integral number of milliseconds shouldn't matter to you; why is that a problem?
I usually use 30 or 33. It's often enough for the user to feel the flow and rare enough not to hog the CPU too much.
Normally I don't limit the FPS of the game; instead I change all my logic to take the time elapsed since the last frame as input.
As far as fixed rates go, unless you need a high rate for some reason, you should use something like 25/30. That should be a sufficient rate, and it will make your game a little lighter on CPU usage.
Your engine should both "tick" (update) and draw at 60fps with vertical sync (vsync). This refresh rate is enough to provide:
low input lag for a feeling of responsiveness,
and smooth motion even when the player and scene are moving rapidly.
Both the game physics and the renderer should be able to drop frames if they need to, but optimize your game to run as close to this 60 Hz standard as possible. Also, some subsystems like AI can tick closer to 10-20 fps, and make sure your physics are interpolated on a frame-to-frame time delta, like this: http://gafferongames.com/game-physics/fix-your-timestep/
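That interpolation amounts to blending the two most recent physics states when rendering. A rough sketch along the lines of the linked article (the State type and names here are my own, not from this answer):

// One snapshot of simulation state; real games carry much more than x.
struct State { float x; };

// alpha in [0, 1): how far real time has progressed into the next fixed tick
// (accumulator / tickLength). Render this blended state instead of the raw one.
State Interpolate(const State& previous, const State& current, float alpha)
{
    return State{ previous.x + (current.x - previous.x) * alpha };
}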

Fixed vs. variable frame rates in games: what is best, and when?

After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes).
Which method works best for specific situations? Please consider:
Catering to different system specifications;
Ease of development/maintenance;
Ease of porting;
Final performance.
I lean towards a variable framerate model, but internally some systems are ticked on a fixed timestep. This is quite easy to do by using a time accumulator. Physics is one system which is best run on a fixed timestep, and ticked multiple times per frame if necessary to avoid a loss in stability and keep the simulation smooth.
A bit of code to demonstrate the use of an accumulator:
const float STEP = 1.f / 60.f; // seconds per simulation step (60 Hz)
float accumulator = 0.f;
void Update(float delta) // delta: seconds elapsed since the last frame
{
    accumulator += delta;
    // Consume whole fixed steps; any remainder carries over to the next frame.
    while(accumulator >= STEP)
    {
        Simulate(STEP);
        accumulator -= STEP;
    }
}
This is not perfect by any means but presents the basic idea - there are many ways to improve on this model. Obviously there are issues to be sorted out when the input framerate is obscenely slow. However, the big advantage is that no matter how fast or slow the delta is, the simulation is moving at a smooth rate in "player time" - which is where any problems will be perceived by the user.
Generally I don't get into the graphics & audio side of things, but I don't think they are affected as much as Physics, input and network code.
It seems that most 3D developers prefer variable FPS: the Quake, Doom and Unreal engines all scale up and down based on system performance.
At the very least you have to compensate for overly fast frame rates (unlike 80s games run on 90s hardware, which ran way too fast).
Your main loop should be parameterized by the timestep anyhow, and as long as it's not too long, a decent integrator like RK4 should handle the physics smoothly. Some types of animation (keyframed sprites) can be a pain to parameterize. Network code will need to be smart as well, to prevent players with faster machines from shooting too many bullets, for example, but this kind of throttling needs to be done for latency compensation anyhow (and the animation parameterization would help hide network lag too).
The timing code will need to be modified for each platform, but it's a small, localized change (though some systems make extremely accurate timing difficult; Windows, Mac, and Linux seem OK).
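For reference, a minimal sketch of one RK4 step of the kind mentioned above, for a single quantity governed by dx/dt = f(t, x) (my own illustration and naming, not code from this answer):

#include <functional>

// Classic fourth-order Runge-Kutta: advances x by one timestep dt.
// It stays accurate and smooth for reasonable dt, which is why it
// tolerates a variable timestep better than naive Euler integration.
float RK4Step(const std::function<float(float, float)>& f, float t, float x, float dt)
{
    float k1 = f(t, x);
    float k2 = f(t + dt / 2, x + dt / 2 * k1);
    float k3 = f(t + dt / 2, x + dt / 2 * k2);
    float k4 = f(t + dt, x + dt * k3);
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4);
}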
Variable frame rates allow for maximum performance. Fixed frame rates allow for consistent performance but will never reach the maximum on all systems (that seems to be a showstopper for any serious game).
If you are writing a networked 3D game where performance matters, I'd have to say: bite the bullet and implement variable frame rates.
If it's a 2D puzzle game, you can probably get away with a fixed frame rate, maybe slightly parameterized for super-slow computers and next year's models.
One option that I, as a user, would like to see more often is dynamically changing the level of detail (in the broad sense, not just the technical sense) when framerates vary outside of a certain envelope. If you are rendering at 5 FPS, turn off bump-mapping. If you are rendering at 90 FPS, increase the bells and whistles a bit and give the user some prettier images to waste their CPU and GPU on.
If done right, the user should get the best experience out of the game without having to go into the settings screen and tweak it themselves, and you, as a level designer, should have to worry less about keeping the polygon count the same across different scenes.
Of course, I say this as a user of games, and not a serious one at that -- I've never attempted to write a nontrivial game.
The main problem I've encountered with variable length frame times is floating point precision, and variable frame times can surprise you in how they bite you.
If, for example, you're adding the frame time * velocity to a position, and frame time gets very small, and position is largish, your objects can slow down or stop moving because all your delta was lost due to precision. You can compensate for this using a separate error accumulator, but it's a pain.
Having fixed frame times (or at least a lower bound on frame length) allows you to control how much FP error you need to take into account.
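A tiny program demonstrating the effect described above (my example; the constants are chosen to trigger it): at a position around 2e7, a 32-bit float resolves only in steps of 2, so a small per-frame delta vanishes entirely.

#include <cstdio>

int main()
{
    float position = 2e7f;   // largish world coordinate
    float velocity = 10.0f;  // units per second
    float dt = 0.0001f;      // a very short frame: delta = 0.001 units
    float next = position + velocity * dt;
    // 0.001 is far below float resolution at 2e7, so the object never moves:
    std::printf("moved: %s\n", next == position ? "no" : "yes");
    return 0;
}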
My experience is fairly limited to somewhat simple games (developed with SDL and C++), but I have found that it is quite easy to implement a static frame rate. Are you working on 2D or 3D games? I would assume that more complex 3D environments would benefit more from a variable frame rate, and that the difficulty there would be greater.
