We have several team members who do not work 100% on our team. You might argue that this is a bad idea in the first place, but let's assume we can't do anything about it. I have had a discussion with one of the other team members, and my argument is that the burndown chart is "lying" to us. Let me give you an example.
Let's say we have a sprint lasting 2 weeks.
We have 6 members, 2 of whom only work 50%.
If both of the part-time members work 100% the first week and 0% the second week, my argument is that after 1 week the burndown will look a lot better than reality. Scrum says that this is the time to add features to the sprint.
I've seen an alternative way to do this, where you enter beforehand the days you are available and then have a non-linear ideal line. My first suggestion was to have placeholders to burn down even when you were not available, but that was shot down pretty quickly.
So I wonder: should we do anything with the burndown chart? Is the chart even useful? Are there other good practices to overcome this hindrance?
We are currently using Urban Turtle.
Regarding the part-time developers - obviously, it is not an ideal situation, but there isn't really much of a problem with it. Would Scrum fail if one of your team members wanted to take a day off and would be available for only 32 hours out of 40 in one week? Would Scrum fail if during the week of Christmas nobody would be working? No - on both counts.
Here's the simplest (and in my opinion best) way to handle your situation: you simply add up the hours that all of the team members will be available for work in that Sprint, e.g. if you have a team of 3, with one member at 100%, and two at 50%, and the sprint is a week, you will add up 40 + 40/2 + 40/2 = 80. That is how many work hours the team has to commit to. It is no different than if you had two full time members.
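A minimal sketch of that arithmetic (the 40-hour week, the names and the percentages are illustrative assumptions, not from any particular tool):

```python
# Sketch: sum the hours each member is available for the sprint.
# Assumes a 40-hour work week; names and percentages are made up.
WEEK_HOURS = 40

def sprint_capacity(members, sprint_weeks=1):
    """members maps a name to the fraction of their time spent on this team."""
    return sum(WEEK_HOURS * sprint_weeks * fraction
               for fraction in members.values())

team = {"alice": 1.0, "bob": 0.5, "carol": 0.5}
print(sprint_capacity(team))  # 40 + 20 + 20 = 80.0 hours to commit against
```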
Regarding the burn down chart - I think that plotting a non-linear "ideal" burn-down is both a waste of effort and misguided. There's a reason it is called ideal. It is not there because you must strive to work on that line, but to demonstrate what the burn down would look like if you could work at a constant pace.
Remember the function of that graph - it is there to indicate possible problems in the development. Not every deviation from the ideal is bad. Life isn't ideal, and you are fooling yourself (and harming yourself) if you get worked up over the difference.
In fact, trying to account for every deviation is exactly the kind of predictive planning that waterfall famously fails at, and that agile methods try to get away from.
What you may want to do is note every major deviation you had, understand it, see whether there is something you can do about it, and then adapt your process. That is better than trying to model the current state.
So to answer the last question - Are there other good practices to overcome the hindrance - the answer is it is not a hindrance. Overcome it by accepting your reality, and ignoring that which is wasteful.
Your situation is a perfect candidate for using story points instead of hours. The relative combined effort to complete a story is more meaningful to your team's ability to deliver value over time, regardless of how much time has historically been spent on similar stories.
There is a well-known anecdote that turns this situation on its head. Imagine you had a full-time team and you knew exactly what hours they could work. Imagine your team had the best Scrum practices and you reached a velocity everyone agreed they were happy with. Are they now confined to that velocity forever? Is it conceivable that if you set the same team the goal of delivering the same velocity in fewer hours, and offered the incentive of simply going home early, it could be achieved?
The answer is yes. In fact a real-life scenario like this occurred at a major US software house, and that team actually got their working week down to 16hrs!! Yes, 16hrs!! They did it by continually fine-tuning how they viewed effort. After all, if you use hours to compare stories rather than comparative complexity, how do you factor in things like reusable components, or cope with unexpected requirement changes from one feature to the next?
Switch to story points, you'll never look back :0)
We have just started doing Scrum at my company. We spend a bit of time estimating effort using planning poker, and then, when the detailed tasks are worked out, a time estimate is put on each task.
The problem we have is that the time estimates are constantly wrong (usually overestimated). Although we can all agree on an effort, getting the team to agree on the time for a task is much harder - what takes one person an hour might take someone else three hours. We end up going somewhere in the middle.
Who should be coming up with the time estimate for a task and when does this happen?
Is this just something we need more practice at, or are we doing it wrong?
The people actually doing the work estimate the cost involved. Agile methodologies frown on using raw time as the metric for estimation; your team should be using an abstraction to estimate cost, such as 'points'. You can start with a rough baseline of 1 hour per point, with a minimum of 1 point, and developers can then make rough estimates of how long something should take. Slap them or anyone else on the wrist if they talk in hours or in any other unit of time.
The point is that as development moves along through multiple sprints, project managers can adjust the 'point' time estimates provided by the team to match reality -- this can even be done per individual developer. Participants will become better and better at estimation as projects progress. So, since sprints are an iterative process, time estimates improve with more iterations.
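As a rough illustration of that adjustment loop (the task data and numbers below are invented, not taken from any tool), the hours-per-point calibration can simply be recomputed from whatever the team actually completed:

```python
# Sketch: recompute an hours-per-point calibration from completed work.
# All numbers are invented for illustration.
completed = [
    {"points": 3, "actual_hours": 5.0},
    {"points": 8, "actual_hours": 19.0},
    {"points": 1, "actual_hours": 1.5},
]

hours_per_point = (sum(t["actual_hours"] for t in completed)
                   / sum(t["points"] for t in completed))

def rough_hours(points):
    """Translate a point estimate into hours using observed history."""
    return points * hours_per_point

print(round(rough_hours(5), 1))  # ~10.6 hours, given the data above
```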
This raises another question: why are you worried about time? Time is basically cost in the waterfall model. In Agile, the goal is to develop software for VALUE, not cost. The reason points are used is that they are an abstract basis of comparison that business owners, project managers and creators (developers) can all view in the same light, unbiased by different participants' cultural, social or psychological perceptions of time. Business owners can look at the points available in a given sprint and, knowing the points available, elect the functionality that is most important. It is always a bit of a tough decision, but again, the goal is to develop toward value and away from time boxing or feature stuffing.
"Who should be coming up with the time estimate for a task and when does this happen?" Depends on how you run your team. Do you let the team members truly self-manage, so tasks are assigned when a person grabs it during the sprint? You may have to keep using the time to complete based on the abilities of an average developer on the team. Do you have a team lead that assigns the tasks to people as they are created during the Sprint Planning meeting? Let the person assigned estimate the time to complete the task.
I agree removing time from the effort estimate is a bit confusing. The big question is: what does it matter that you are overestimating the task time? Is the team sitting around for 4-5 days at the end of a sprint with nothing to do? If so, go to the Product Owner and let her know the team wants to add one or two small items into the Sprint. You don't normally add stuff to an ongoing sprint, but Scrum is a framework to manage work, and as long as the team signs off on adding the new items, there is no need not to let Scrum work for your team....not force your team to work for Scrum.
Also, your questions seems to indicate your team has a greater velocity than what is being planned. If your 2-week sprint (10 work days) has a velocity of 10, but your team is getting finished with everything by day 7, just up your story points on the next sprint to 11 or 12.
In my workplace we use scenario-based testing. However, whenever something is fixed or a new patch is added, new scenarios are added, and as a result the list keeps getting longer and longer; it now takes three-plus days to test the application.
Is there a way to do proper testing without it taking such a long time?
What do you use?
Thanks
Only 3 days to test your application! We've got test jobs that run for maybe 15 days. And I guess other lurkers around here can tell you that they have even bigger test jobs; you know the drill -- when I were a lad we didn't even have a hole in't' road to live in.
But seriously, 3 days to fully test a release candidate with a benefit stream worth O(USD10^7) doesn't seem outrageous to me. On the other hand, if it's taking you 3 days to test changing one field on a GUI from 12 characters to 24 characters, then that does seem a bit too much. I think your question might be better phrased as 'How much of our development time should be spent on testing?' and the answer might be anything from 10% -- 50% (possibly higher for safety-critical systems). If you are spending 2 days developing a patch, then testing should probably take no more than half a day.
And yes, the scenario where your test suite expands as your application expands is very familiar. When we add a new bit of functionality we tend to add new tests; a better approach -- one we never have time for, though we always have time to deal with the consequences of not taking it -- is to also modify existing tests. Modify code -> modify tests; add new code -> add new tests.
Yes, we use automated testing as much as we can; we use a lash-up of bash scripts, python programs and make to drive our automated tests. The processors we use never complain that testing is boring and repetitive, so we have no ethical qualms about working the poor dawgs close to heat-death. Sadly local labour laws prevent the same robust management principles being applied to the carbon-based life forms in our offices.
CI can help you achieve that; automation is the key word. For the testing process you need automation at several levels: unit tests, interface tests, UI-based tests and performance tests. But there is a core concept that has to be accepted: quality is not the same thing as testing. Unit tests can be written by the developers before coding is finished; UI-based and interface tests are developed by QA throughout the coding process. When a new feature is finished, there is already a test suite in place to ensure its quality. The only thing left to do by hand is the functional testing that automation cannot cover.
I believe you should go for an agile methodology; this will help you create small releases, and the scenario lists won't get as long as they are now. You can also automate the few scenarios which are used repeatedly for regression testing.
I also believe you should go for Agile. Since Agile is an iterative and incremental process, the requirements and updates shared by the client can be sorted in order of priority, from high to low, as a product backlog, and sprints can be planned from that backlog. While development is in progress for sprint 1, you can prepare the test scenarios for sprint 1 in that same span. After the sprint delivery, any change request can be handled the same way, and with the help of Scrum and sprint retrospective meetings the process can be improved in upcoming projects. The project can thus be delivered sprint by sprint, each in a short span of time.
Why don't you automate your application's test suite? Whenever there is a gap between the current and the next release, you can automate the existing test cases in the meantime. This will not only save testing cycle time, but regression testing will also be more accurate, without skipping or missing any test scenario.
You can automate at least 60-70% of your total test cases, which will cut test execution time by a good margin, and the automated suite can be run overnight.
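If the application exposes anything scriptable, even a thin automated layer pays off quickly. A minimal sketch with pytest, where `calculate_discount` is just a stand-in for whatever your scenarios actually exercise:

```python
# Sketch of an automated regression check with pytest.
# `calculate_discount` is a placeholder for your own application code.
import pytest

def calculate_discount(total, customer_is_member):
    return total * 0.9 if customer_is_member else total

@pytest.mark.parametrize("total, member, expected", [
    (100.0, True, 90.0),    # scenario: member gets 10% off
    (100.0, False, 100.0),  # scenario: non-member pays full price
    (0.0, True, 0.0),       # scenario: empty basket edge case
])
def test_discount_scenarios(total, member, expected):
    assert calculate_discount(total, member) == pytest.approx(expected)
```

Run it with `pytest` on a schedule (e.g. nightly) and the scenario list stops being a multi-day manual chore for the parts you have covered.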
I have a developer on my staff who chronically overshoots deadlines and estimates. On several projects, for the last week or two, every day I hear "It should be done by the end of the day". This developer does good work.
I have already spoken to him about these problems. He seems genuinely frustrated and unsure what to do to correct them.
My Questions are:
What kinds of punishments for passing a deadline are effective?
What ways can I coerce this employee to police his actions (time estimates, etc.) himself?
UPDATE:
Based on the responses, here's what I have figured out.
Punishment is a bad idea.
It is natural for an employee to be unable to fix estimating problems without intervention.
Don't set deadlines unless there are real company consequences (e.g. a lost contract) for not being done by then.
Utilize available methods (Agile, Joel's checklist) to help the developer estimate better.
Thanks for the links and information. Also thanks for updating my thinking.
I don't think the problem is that he is missing these deadlines.
I think he has a real problem in estimating the amount of time it will take to complete a task.
Have him start keeping a journal of what he says a task will take and how long it actually took him to complete the task. Eventually, this journal will become a sort of guide for him to create better estimates. Once he becomes better at estimating, he shouldn't feel as rushed or harried.
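The journal doesn't need to be fancy; a spreadsheet or a few lines of script will surface his personal correction factor. A sketch with invented entries:

```python
# Sketch: derive a personal correction factor from an estimate journal.
# Entries are (task, estimated days, actual days); the values are invented.
journal = [
    ("login page", 1.0, 2.5),
    ("report export", 2.0, 3.0),
    ("search filter", 0.5, 1.5),
]

ratios = [actual / estimate for _, estimate, actual in journal]
correction = sum(ratios) / len(ratios)

print(f"multiply future estimates by about {correction:.1f}")  # ~2.3 here
```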
There is an interesting article by Joel Spolsky: Evidence Based Scheduling
1) Break ‘er down
When I see a schedule measured in days, or even weeks, I know it’s not going to work. You have to break your schedule into very small tasks that can be measured in hours. Nothing longer than 16 hours.
This forces you to actually figure out what you are going to do. Write subroutine foo. Create this dialog box. Parse the Fizzbott file. Individual development tasks are easy to estimate, because you’ve written subroutines, created dialogs, and parsed files before.
If you are sloppy, and pick big three-week tasks (e.g., “Implement Ajax photo editor”), then you haven’t thought about what you are going to do. In detail. Step by step. And when you haven’t thought about what you’re going to do, you can’t know how long it will take.
Setting a 16-hour maximum forces you to design the damn feature. If you have a hand-wavy three week feature called “Ajax photo editor” without a detailed design, I’m sorry to be the one to break it to you but you are officially doomed. You never thought about the steps it’s going to take and you’re sure to be forgetting a lot of them.
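The rest of the article builds a feedback loop on top of that breakdown: record each developer's estimate-versus-actual ratios, then simulate the remaining work against ratios sampled from that history to get a range of finish dates instead of a single number. A rough sketch of the idea (not Joel's code; every figure is invented):

```python
# Sketch of the Monte Carlo idea behind evidence based scheduling.
# "Velocities" are past estimate/actual ratios; all numbers are invented.
import random

history = [1.0, 0.8, 0.5, 1.2, 0.6]        # past estimate/actual ratios
remaining_estimates = [4, 8, 2, 16, 6]     # remaining tasks, in hours

def simulate_total_hours():
    # Divide each estimate by a randomly chosen historical velocity.
    return sum(est / random.choice(history) for est in remaining_estimates)

runs = sorted(simulate_total_hours() for _ in range(1000))
print("50% confidence:", round(runs[len(runs) // 2]), "hours")
print("90% confidence:", round(runs[int(len(runs) * 0.9)]), "hours")
```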
The main point is that he (and you) should learn from his mistakes, and take them into account on the next estimation.
Also, if you are a developer, I would do regular code review at the end of the day to get a better insight into his development process.
And, of course, smaller iterations and more granularity with tasks. Set the maximum task duration to 1 day. That's the rule we have.
If your first question is "what kind of punishments to be considering", I think you're on a loser straight off. If you feel he does good work, you may have to look at the deadlines/estimates and see if they were realistic in the first place. Who set them? If the developer in question was not involved, then that may be part of the problem.
I agree with #OTisler that pair programming and possibly a regular end-of-day progress review with you can help him through... although if the deadlines/estimates were unrealistic to begin with, that's not where your problem lies.
Closer monitoring on a few specific tasks should highlight where any issues lie.
What kinds of punishments for passing a deadline are effective?
None. If you anger him, he won't do the work, or he'll find another job. You should help him figure out why his estimates are off. There is a book by Steve McConnell about making estimates; I would start there.
What ways can I coerce this employee to police his actions (time estimates, etc.) himself?
By helping him find the right way to make estimates.
First, make sure you are crystal clear in your requirements.
I hate to say it, but in my experience, blown deadlines are just as often a matter of unclear requirements or weak specifications on the part of a supervisor. First thing to do is to make sure the problem isn't either originating with, or exacerbated by, you.
Also, make sure your requirements are realistic, as well as his estimates.
Make sure that your own expectations aren't pushing him to make unrealistic estimates in order to meet unrealistic requirements.
Remember, you do the requirements, but the developer ALWAYS does the estimates, and should not be swayed with "can we do this any faster" unless you are also specifying functionality to be dropped.
Then, make sure he is tracking his time/tasks accurately, so you can get a good view of what is going on with the project.
This process will show any lack of proper time/task tracking, which may end up being the first step to improvement. If you can't see after the project how long a particular item took, that is probably the cause of the problem right there - not enough definition in the estimate, or missing "dependency" tasks that are discovered mid-project, but never estimated.
You HAVE to know how much time was spent doing what, accurately, before you can find out where the creep was, or what can be done about it.
Then, see where his estimates are failing and figure out why. Go over an estimate of a blown project, make that into a project itself - a problem to be solved.
Once you've determined that his estimates are indeed the source of the problem, go over an estimate that went over with him, and perhaps another developer, and figure out why.
This will help you figure out what the cause of the problem is. A solid understanding of the problem will likely be the actual solution.
Lastly, if you actually reach a point where you have to try punishment or coercion, it's time to fire him and start over.
Punishment and Coercion are appropriate responses to willful wrongdoing in certain situations.
However, if this developer is actively trying to do a good job, then you would only worsen the situation by generating negative attitude and frustration.
If the problem can't be solved, and you are sure the problem is with him, and not you, then it's time to fire him and get a developer who can meet deadlines. Great work doesn't mean much when your costs are blown up and profit goes out the window.
Okay, this is fairly common -- developers being optimistic. It's the job of management to deal with it. If anyone should be punished, it's the manager (you?).
I'm glad you at least asked. It looks like you got some good answers here; I hope they help and that you find a way to actually implement some that work.
When I was young, my first good manager dealt with it this way:
First of all, he had me come up with an itemized list--breaking tasks down to hours, and estimating each one with a very liberal estimate--no period should be less than 4 hours regardless of how small the task was.
Then he looked at them and told me to double all my estimates. (Developers, especially younger developers, don't think about the fact that you are only productive for about 1/2 the day, if you're lucky--and half of that is spent at things you didn't expect to have to do).
Then, before creating his schedule, he doubled all my estimates (Without telling me).
He turned them in this way regardless of schedule requirements from above. A good manager should realize that saying it needs to be done in 2 days, doesn't make it possible.
As I got better at estimating we both noticed and adjusted accordingly.
A manager's job isn't just to deliver a project; it's to build a team. More often than not that's going to require training of some sort. This is also the reason that an engineering manager who is not an engineer is unacceptable: they can't really help with this kind of thing.
Failure of a project or schedule is VIRTUALLY NEVER the fault of the developer (except in a few chronic cases where he isn't really fixable or of any worth and needs to be fired). The manager has made bad decisions, either in hiring the developer, trusting him, managing him or staffing the project.
And really, what is fault anyway? I suppose if the manager isn't very good at making the project happen, he's going to need someone to point at... If HIS manager is any good, he'll ask why it got this far, what you did to fix it, etc.
Hiring a manager is hiring someone to solve the problems. To make the developers productive. If he can't make them productive, he isn't the right person.
To your questions:
If you choose to punish people for missing deadlines you will not get good results. They will be demotivated and feel belittled. If you keep pushing people to meet deadlines the quality of work will suffer and you will end up with a lot of time spent bug fixing afterwards.
To improve his time estimates you could try using Joel Spolsky's evidence based scheduling which has a nice feedback loop to improve the resulting estimates.
But I have some questions that I think you need to think about.
Is he later than everybody else? If so, why - is it because he is an over-optimistic estimator or a slow worker? Over-optimistic estimates are easy to fix - just multiply all his numbers by a factor, as per evidence based scheduling above. If he is a slow worker, why? Does he get distracted? Is he very careful to produce very low-defect code? Is he over-engineering solutions? Is he not re-using code effectively?
Do the deadlines matter, or are they just arbitrary dates based on the estimates for the purposes of reporting progress up the management hierarchy? If the latter you can solve this by tweaking his estimates yourself.
What kinds of punishments for passing a deadline are effective?
You stated the point and missed it. The obvious punishment for passing a deadline is death: if the developer is still alive after passing a deadline, then the "deadline" obviously was not a real deadline. Do you think it's funny to put developers under pressure using martial language?
Fix your wording.
Motivation
First of all: Read Peopleware
Next: why do you think punishment will be an effective way to manage people who are supposed to be creative? I think you have to rethink the whole approach to management vs. team.
As I see it, the manager's first, and most important, role is to make sure that the developers can be creative and productive. Not that they are productive. There is a big difference in those small words. To be creative you need a safe environment. By keeping people constantly under pressure from both deadlines and threats of punishment you create the exact opposite of safe.
Also, as a manager, you need accurate information on which to base decisions. This also requires a safe environment. If there is a risk of punishment for being honest and outspoken, you are guaranteed to get lies and an absence of information - a very dangerous basis for decisions.
Estimates
As others have pointed out, estimates are estimates. In our team we don't do any individual estimates at all; we estimate as a team. (I'm a bit reluctant to call what we do Scrum, but most of it tries to emulate it, if nothing else.) I think this is a really great way to do estimates: each team member is given a deck of cards consisting of the numbers 0, 1/2, 1, 3, 5, 8, 13, 20, 40, 60, 100, and when estimating a task each developer picks a card (the cards are hidden until everyone has picked one, to avoid influencing estimates) and the average of the selected cards is taken as the estimate.
Notice how the numbers get progressively less precise. This is by design, because large estimates are by necessity less accurate.
For our team we have opted to use the unit "ideal man days" for estimates. As far back as any of us can remember an ideal day hasn't occurred yet, but it is a good basis once you know how to translate calendar days to "ideal man days".
As Scrum prescribes, development is done in sprints of two weeks, after which the new version is deployed to the production environment. After each sprint we take the sum of the estimates of the completed tasks and divide that by the planned man days for the sprint. This factor is then the basis for estimating how many "ideal man days" the team can spend in a two-week period.
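A compact sketch of those two calculations as described above (card values and sprint numbers are invented):

```python
# Sketch: estimate one story from the picked cards, then derive the
# "focus factor" from the previous sprint. All numbers are invented.
def story_estimate(cards):
    """Average of the cards the team picked for one story."""
    return sum(cards) / len(cards)

print(story_estimate([3, 5, 5, 8]))    # 5.25 "ideal man days"

completed_ideal_days = 14              # sum of estimates of finished tasks
planned_man_days = 40                  # e.g. 4 people * 10 working days
focus_factor = completed_ideal_days / planned_man_days

next_sprint_capacity = focus_factor * 40
print(round(next_sprint_capacity, 1))  # ~14 ideal man days to plan for
```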
Actual work items done by an individual developer don't need an estimate; the first approximation is always 1/2 - 1 day to complete. If this estimate turns out to be false, you just grab a fellow developer and do it together to get it done, or you break the work item down into smaller tasks so it can be distributed better.
Set Milestones and try Agile as #OTisler suggested.
I don't think you should punish him. Just get him to understand how to make accurate estimates.
As a team lead I've had my team members tell me that it will be "no problem" to finish X feature by the deadline. Then I usually sit down with them and go over what tasks and sub-tasks I think need to be done in order for the feature to be finished, and how long the developer thinks each will take.
After we do this exercise, and add up all the task and sub-task estimates, it will inevitably take much longer than the developer thinks in their original estimate. I usually only have to do this exercise with them a few times before they start making more accurate estimates.
What amazes me is that you only have one of these guys.
Engineers are horrible at estimating how much time something will take. I bet if you look carefully at your other developers' estimates, you'll find a lot of padding. Sometimes the padding isn't necessary, but the task expands to fill the available time anyway.
The solution to this is to change around how you do estimates - for everyone. Developers may be bad at estimating absolute time, but they're pretty good at relative time. So on Monday, instead of "how long will it take to add a whoosiwhatsit?," ask "what can you get done on the whoosiwhatsit in less than a week?" That becomes their task for the week.
The following Monday you look at how it went. "Well, I got the floogle installed in two days but it turns out it impacted the mcphee...so this week I need to decouple those guys so the whoosiwhatsit files don't get overwritten." Ok, there's their task for the week.
You might think it won't help, because you still don't know when the whoosiwhatsit is going to be ready. That's true. You have two choices here:
If you need a deadline, then you have to force your errant developer to pad his estimates like everyone else. It won't take him long to get the hang of it, and in no time at all he'll be taking "2 weeks" to write something that should have taken a day.
Your other choice is to trade the fictitious estimates for more visibility. In the long run this approach gets you more productive and much happier engineers.
So the developer does good work, but is poor at estimating the amount of time for delivery? I'm not sure you have a punishment situation on your hands just yet.
Maybe going forward for some time, have him walk you through his process for estimating a delivery point. This can be an opportunity to ask him why steps X,Y, and Z take certain amounts of time. He may find himself revising his estimates simply by doing the exercise at what is almost certainly a slower pace.
Ask yourself this: what does your job entail?
If you're just blindly passing estimates from developers (who you know can't give good estimates) up the management line, and not deciding for yourself whether that estimate is achievable, then you're not doing your job.
Try to think in terms of "value-add" (one of my old employers used that term a lot, and I hated it, but it probably works for you in this situation). What value are you adding? If you're just passing stuff in both directions between upper management and the developers, then ultimately you're not earning your money. You could be removed, and nothing would change.
The best manager I ever had was one who looked through a set of requirements given to him by another team, told them straight out that almost a third of them were bull, and had them removed before I ever even saw the list. The worst one I ever had made me write all this extra management-type documentation which none of the other managers I'd ever had asked me to do (I really got the impression I was literally doing his job for him), didn't even give me project due dates, and hardly turned up to work. They were both at the same company, bizarrely enough.
90 hours is one common deadline for a short project. An easy improvement is to stop estimating "your own time" and have someone else estimate it: programmers shouldn't make time estimates for their own projects, since evidence shows that estimating one's own time produces larger errors than having an observer estimate the same work.
I'm trying to reteach myself some long forgotten math skills. This is part of a much larger project to effectively "teach myself software development" from the ground up (the details are here if you're interested in helping out).
My biggest stumbling block so far has been math - how can I learn about algorithms and asymptotic notation without it??
What I'm looking for is some sort of "dependency tree" showing what I need to know. Is calculus required before discrete? What do I need to know before calculus (read: components to the general "pre-calculus" topic)? What can I cut out to fast track the project ("what can I go back for later")?
Thanks!
Here's how my school did it:
base:
algebra
trigonometry
analytic geometry
track 1: calc 1, calc 2, calc 3 (multivariable), differential equations
track 2: linear algebra, discrete math 1, discrete math 2
track 3: statistics
The base courses were a prerequisite for everything, the tracks were independent and taken in order.
So to answer your specific question, only algebra is needed for discrete. If you want to fast track, do one of these:
algebra, discrete
algebra, linear algebra, discrete (if you want to cover matrices first)
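If it helps to take the "dependency tree" framing literally, here is a toy sketch; the edges are just one reading of the tracks above, not an authoritative curriculum, and the topological sort spits out a valid study order (requires Python 3.9+ for `graphlib`):

```python
# Toy sketch: prerequisites as a graph, plus a derived study order.
# The edges are one interpretation of the answer, not an official syllabus.
from graphlib import TopologicalSorter

prereqs = {
    "trigonometry":      {"algebra"},
    "analytic geometry": {"algebra"},
    "discrete math":     {"algebra"},
    "linear algebra":    {"algebra"},
    "statistics":        {"algebra"},
    "calculus":          {"algebra", "trigonometry", "analytic geometry"},
}

print(list(TopologicalSorter(prereqs).static_order()))
# algebra comes first; everything else follows in a valid order
```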
HTH... It about killed me when I returned to school and took these, but I'm a much better programmer for it. Good Luck!
My advice is to lazily evaluate your own dependency tree. Study something you think is interesting -- when you hit something you don't know, go learn about it.
I always find it easier to learn something new when I already have a context in which I want to use it.
This is a particularly cool site for visualizing how everything in the math world fits together:
http://www.math.niu.edu/Papers/Rusin/known-math/index/mathmap.html
It's also got short summaries of many subfields you've probably never heard of, which is fun.
Usually, an overview of each field is a good thing to have when looking at any topic, but it's rare to have genuine dependencies the way we'd think of them. Algebra is always needed. I can't think of a time I've needed any trigonometry (except to extend it with new things from calculus). I'm even quite sure people wouldn't agree on what a dependency graph would look like, or even which field each topic belongs in.
I think the right way to approach it is to just collect a wide range of topics from all the branches and read them in whatever order you feel like, recording dependencies between topics as you go (respecting them, or not, as you please). This should have the far more important property of keeping the student interested.
It's also my experience that if something just has you stumped, just mark it and set it aside for later.
As for my school, well, it was similar to Harrison's:
combinatorics,
linear algebra,
calculus,
numerical analysis (error analysis in particular),
logic,
statistics (with operations research / queueing theory).
Take a look at MathWorld. Browse topics or search for one, and you'll see where it sits in the overall tree.