How to force tasks in Microsoft Project to be "scheduled" based on priority and resource assignment?

I needed to do some higher-level project planning that doesn't really fit into the workflow of our day-to-day task management tools (FogBugz and whiteboards), so I figured I'd give MS Project a whirl (it being free through MSDN).
I've hit a pretty solid wall, though. What I have is about 120 tasks, a set of people (referring to them as "resources" is amazingly harsh to me, but I digress), and a rough prioritization of those tasks. Some tasks have a person assigned to them, some don't (simply because we don't know who's going to do what yet).
Fine so far. The problem is that, except in those relatively rare instances where tasks are linked (most of the work involved can be done in any order), all of the tasks are scheduled to run concurrently. What I'd like to do is have Project figure out some scheduling scenario based upon:
the defined tasks
their relative priority
any links/dependencies, if defined
the availability of the people that I've defined, while respecting the explicit "resource" assignments I've already made
Is this possible? I've fiddled with the resource leveling dialog and read more MS Project documentation than I'd care to admit, so any suggestions are welcome.
FWIW, I noticed in my searches this question on Yahoo Answers; the person there seems to be after roughly the same thing, but I figured asking here might be more fruitful.

After some further experimentation, I've found a partial solution to my own question. If you:
assign a person to each task
specify on the Advanced tab of the Task Information panel that all tasks should (select all your tasks and click the Task Information button to update these properties for all of them at once):
use a calendar (called "standard" in my project file)
not ignore resource calendars when scheduling
have a constraint of As Soon As Possible (which is the default, I believe)
Choose Level Resources from the Tools menu, and specify:
Look for overallocations on an Hour by Hour basis
a leveling order of "Priority, Standard" (which rolls in the relative Priority values for each task you've defined when setting the schedule)
Click "Level Now" in that leveling resources dialog, and all of the tasks should be rescheduled so that they're not running concurrently, and that no one is "overscheduled".
You can ostensibly have Project automatically reschedule things as tasks are added, edited, etc., but I suspect that would result in chaos, as there's nothing about the resource leveling process that makes me think it's "stable" (i.e. I have no assurance that two levelings performed back-to-back would yield the same schedule).
It would be nice if Project would "fully allocate" whatever people you have configured, so that you don't have to assign people to tasks just to have those tasks scheduled in a way that is consistent, if not correct. Any thoughts on that front would be most welcome.
That seems (and feels!) like a lot of work, but I think the result is relatively decent -- a super-high-level view of a project that allows for a high degree of day-to-day flexibility, but still affords one a way to reasonably make plans around "interdisciplinary" activities (e.g. once this is done, we need to buy those four servers, make sure our legal stuff is taken care of, and pull the trigger on that marketing push one week later, etc.).

Related

How to create task in Scrum? [closed]

We use Scrum in our development, and we often create tasks/tickets for developers. I want to find a way to record them, but I'm torn between two ways of doing so: one is to write them on whiteboards, the other is to write them in an Agile project management tool (Pivotal Tracker). The two seem duplicative, so which is better?
It depends who cares about the tasks.
In teams very new to Scrum, devs can split stories into tasks to get a better idea of estimates, collaborate on work, etc. For this reason, whatever the devs prefer should be the way forward. Usually a dev will prefer to put tasks on a card, or a whiteboard, or something close to the workspace, but some devs do prefer electronic systems. I find the act of moving a card or writing on a board gives a sense of commitment to a task or story, so I prefer this.
Sometimes the PM prefers to have the tasks so that he can see if a story is 65% done, etc.
Every single time I've seen this it ends up with the PM telling the devs off for not finishing their stories when they said they would, or saying, "It was 85% done yesterday! How can you not have finished it?" This happens a lot with new teams, where devs often prefer to do the easy bits first, or they don't know how to integrate their work with others' yet.
The thing is, there is no value whatsoever in the tasks! It's only possible to get useful feedback by delivering the stories, even if they don't represent completed features but just slices through the system. The tasks themselves are only valuable for the iteration until the stories are completed, so no historic record is needed. PMs who value the tasks often end up with part-done stories and nothing to release or showcase.
For this reason, I would try not to duplicate the tasks for my recording efforts, but just to let the devs make the tasks themselves and put them wherever they want to. It's easy enough to count tasks manually for a burn-down.
I'd have to disagree with the previous answer that there's no value in the tasks. I myself prefer electronic methods such as:
- Calendars: Not only do they say what needs to be done, but also when and how long it might take
- Task List: Just like the traditional todo list.
- Scope Items: Turning the items in the scope spreadsheet into deliverables.
Having physical tasks on cards (tried that) or on the whiteboard in the LLP (did that for a while) is technically better, because you're able to always get to the information quickly. However, if your development team is distributed, especially when the PM is in another part of the world, you're going to end up having to duplicate data electronically. The tasks themselves add value to the development house in that they provide good historical data about how long certain tasks take. This information is extremely valuable in building the scope matrix of future projects, and as such affects the costing and delivery time. As a side benefit, you'll be able to see by historical trend which asset (i.e. developer) is able to perform and at what efficiency. E.g. if you give a developer a database task and they were inefficient, then you'll know next time that database tasks should either be given to someone else, or that during the downtime between projects said asset should spend time upgrading their database skills.
So important is historical task recording that sometimes clients will ask to see the tasks and how long they took as verification of "the bill". When clients are charged at the development house's hourly rate for work, they want accountability for every hour (or part thereof) spent. We used to fill out these sheets with the tasks and durations to send along with the invoice to the client; and sometimes they would question it.

Best practice for Scrum "done" concept in JIRA [closed]

I work at a small service based company where we are starting to implement Scrum practices, and we are also starting to use JIRA with greenhopper for issue tracking. Our team has defined "done" as:
coded
unit tested
integration tested
peer reviewed
qa tested
documentation updated
I'm trying to figure out whether this should be done using a separate issue for each item in the above list for each "task", or if some of these items should be implemented in the ticket workflow, or if simply lumping them together in one issue is the best approach.
I'm disinclined to make these subtasks of a task, as there is only one-level nesting of issues and I fear there is a better use for that capability.
I also am not too excited about modifying the workflow, as this approach has proved to be a burden for us in other systems.
If all of these items are part of the same ticket then that seems weird to me because the work is likely spread between multiple team members, and it'll be hard to make tasks that are under 16 hours that include all of those things.
I feel like I understand all of the issues, but as of yet I don't know what the best solution is.
Is there a best practice? Or some strong opinions?
Done is done - it has to be all those things you defined. However, treating them explicitly as steps in a bug tracker can have the undesired side effect of encouraging divisions within the team and throwing stuff over the wall: coders would claim they are done once the ticket is marked "coded" and "unit tested", testers once it's marked "tested", and so on.
This is exactly the opposite of what Scrum intends to do - the whole team commits to doing the stories so that they meet the definition of done in the end. So even though some of the elements of achieving done are indeed steps one should be very careful with solidifying these steps in any kind of defined workflow.
(This btw shows nicely why using a bug tracker as a scrum tool is a bad idea. Those are different tools that should be optimized for different things - even if linked together through some APIs.)
I certainly wouldn't nest them, since they are steps common to each task. Making them subtasks would just increase the complexity and boilerplate of the system. These seem like perfect workflow stages to me.
Something like Submitted->Assigned->Coding->Review->Testing->Finished.
Where Coding requires "coded", "unit tested", and "integration tested" before moving to Review, Review requires Peer Review before moving to Testing, Testing requires QA Testing before moving to Finished.
The only reason this would be tricky is if you're allowing Peer Review and Testing to be done in parallel. I see problems with allowing that, since if the code fails peer review and is subsequently changed it invalidates the testing work done by QA.
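If it helps to picture that, here is a toy sketch of such a gated, linear workflow (illustrative Java only; this is not how JIRA workflows are actually configured):

    // Toy model of a linear ticket workflow; purely illustrative,
    // not JIRA configuration. Only forward, single-step transitions
    // are allowed, which is what rules out parallel review and QA.
    enum Stage {
        SUBMITTED, ASSIGNED, CODING, REVIEW, TESTING, FINISHED;

        boolean canMoveTo(Stage next) {
            return next.ordinal() == this.ordinal() + 1;
        }
    }

    public class WorkflowDemo {
        public static void main(String[] args) {
            Stage current = Stage.CODING;
            System.out.println(current.canMoveTo(Stage.REVIEW));  // true
            System.out.println(current.canMoveTo(Stage.TESTING)); // false: review first
        }
    }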
coded
unit tested
IMHO these belong together, as both should be handled by the same person (preferably TDD, which really makes it impossible to separate these).
integration tested
In our team, this is usually done by the same developer, so we typically do it as part of the above task. Other teams may do it differently.
commented
Do you mean code comments? Then, to me, this does not deserve a separate task. Otherwise, please clarify.
peer reviewed
A separate task for a separate developer (or more).
qa tested
A separate task for testers / QA personnel.
I would add documentation - it may not always be needed, but often is. Again, it should be a separate task, typically for the same guy who did the implementation (but not always).
One prime concern to practically all the Scrum teams I have been working with so far is to make sure that nothing important is forgotten from the above. Partitioning into distinct tasks may help this. Then you can clearly see in your backlog what's left to do. Lumping all of these into one task makes it easy to forget about this or that little detail. For us, it was most typical to forget about code review and documentation, that was the main reason why we turned these into independent tasks.
Done defines what the Team means when it commits to “doing” a Product Backlog item in a Sprint. Some products do not contain documentation, so the definition of “done” does not include documentation. A completely “done” increment includes all of the analysis, design, refactoring, programming, documentation and testing for the increment and all Product Backlog items in the increment. Testing includes unit, system, user, and regression testing, as well as non-functional tests such as performance, stability, security, and integration.
Reference: Scrum Guide - Written by Ken Schwaber and Jeff Sutherland (Inventors of Scrum)
You state that you are following "Scrum practices". It sounds to me like you are just using a few parts of the Scrum framework and not others, is that true? First of all, Scrum is not really a practice; it is a framework - you either use the framework or you don't. It works on the basis of inspect and adapt, so apart from the basic Scrum framework rules, nothing is set in stone, and you won't get an exact answer to your question. The best way to find the answer is to hire experienced Scrum professionals, plus experienced developers and testers, and try the above "done" plan in your Scrum team.
Remember always Inspect and Adapt. There are three points for inspection and adaptation in Scrum. The Daily Scrum meeting is used to inspect progress toward the Sprint goal, and to make adaptations that optimize the value of the next work day. In addition, the Sprint Review and Planning meetings are used to inspect progress toward the Release Goal and to make adaptations that optimize the value of the next Sprint. Finally, the Sprint Retrospective is used to review the past Sprint and determine what adaptations will make the next Sprint more productive, fulfilling, and enjoyable.
Do not spend loads of time documenting or looking for a solution to a given process problem, because most of the time the problems change faster than you realize. It is better to inspect and adapt, provided you have at least a basic knowledge of Scrum and you are using the Scrum framework and not just a few Scrum-like practices.
We use a pretty similar system in JIRA and I have an open question here and on the Atlassian boards asking a very similar question. We have a similar definition of done. We create the main story in descriptive form i.e. "The legend text on the profit and loss graph overlaps". We then define sub-tasks which are either of type 'technical' or 'process'. Technical tasks are the actual work of implementing the story "Research possible causes on vendor site", "Implement fix in the infographic class". Process items include 'Peer Review', 'Make Build', 'QA Testing', 'Merge'. As one comment noted you may have QA going on before/during Peer Review. As a part of the Scrum process we have QA going on nearly all of the time (they are part of the team) sometimes they sit with the developer, sometimes they get 'bootleg builds' to run in a test environment. This is exploratory testing and is considered part of the coding process to us. The sub-task for 'QA Testing' is for integration and regression testing and is a final validation of the whole story after Peer Review is completed. By that time the QA team already has a complete test plan they worked up during exploratory testing and it's typically just a matter of running through the plan and 'checking it off'.
We've gotten to this point after running sprints for a year and making changes during the retrospective. I'm open to suggestions as I think one of the downsides to the retrospective is that you can group-think yourself in one direction with little hope of ever backing all the way out and considering a different path.
We use two boards for this purpose. We have one board for the Development Sprint where "Done" is Ready for Testing. You can't enter a sprint unless you're well and truly ready to start development (all analysis done, estimates done, people know what they are supposed to be doing - all the conversations have been had, shall we say, though our conversations tend to take place in JIRA Comments given the distributed team) ... and you exit when you finish development. That's the best way to track whether our development team is meeting their own goals without being impacted by QA. Meanwhile, QA uses a Kanban style board and they go from "Ready for Testing" (this is their "to-do"), through In Testing to Ready for Release.
We switched to this because we previously had all these steps on a single board, and we weren't "meeting our commitments" within any sprint: there was no way to both develop and test in a single sprint, since we have to do a code migration to the QA environment for the final testing to occur, although testing happens all along the way. We are still trying to figure out how to do things correctly, so this may not be the right answer, but it sounds like it's not something you've thought of, so maybe it would work for you.
and it'll be hard to make tasks that are under 16 hours that include all of those things.
This is your real issue: the ability to break down stories into small, useful vertical slices of functionality. Working on this will make your team more agile and give the PO more flexibility.
To the contrary, breaking down the work by process/mechanical step will only make you less agile and really serves no useful purpose. Either you are done or you aren't; no one cares if you are dev-complete but not tested, so don't bother tracking it by the hour... it's waste.
Refocus on your stories, not on tasks.
We use subtasks.
Given that the story is a shared item (the whole scrum team works on it), we use the subtasks as 'the post-it notes' that let us track tasks which individuals need to tackle.
We don't require that every little piece of task is represented as a subtask.
We are not bookkeepers, but developers.
The team agreement is that if you can't take up a task immediately, you just jot it down as a subtask to the story. (Using the Agile plugin, it is really easy.) I.e. we will never systematically have a subtask 'create unit test', but on some occasions, when someone is struggling to get that dynamock up and running, you will see this subtask pop up in the story. Having it there allows the team to discuss it during the scrum.
If you want to generate the checklist automatically, look at the create subtask on transition plugin.
https://studio.plugins.atlassian.com/wiki/display/CSOT/Jira+Create+Subtask+for+transition
It allows you to automatically add the subtasks when the story has been committed.
BTW - JIRA is more than a bug tracker. We are using it in a wide variety of applications, including the management of our sprint activity (as an Atlassian partner, I'm biased :-)).
Francis
The important thing is that you use sub-tasks as real tasks, not as activities of the main task. An issue tracker is primarily meant to capture what you are doing, not how you are doing it and in what order.

How to increase my Web Application's Performance?

I have an ASP.NET web application (.NET 2008) using MS SQL Server 2005. I want to increase the performance of the web site. Does anyone know of an article containing steps to do that, step by step, in SQL (indexes, etc.) and in the code?
Performance tuning is a very specific process. I don't know of any articles that discuss directly how to achieve this, but I can give you a brief overview of the steps I follow when I need to improve performance of an application/website.
Profile.
Start by gathering performance data. At the end of the tuning process you will need some numbers to compare to actually prove you have made a difference. This means you need to choose some specific processes that you monitor and record their performance and throughput.
For example, on your site you might record how long a login takes. You need to keep this very narrow. Pick a specific action that you want to record and time it (use a tool to do the timing, or put some Stopwatch code in your app to report times). Also, don't just run it once; run it multiple times. Try to ensure you know the full environment setup so you can duplicate it again at the end.
Try to make this as close to your production environment as possible. Make sure your code is compiled in release mode, and running on real separate servers, not just all on one box etc.
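The question is about ASP.NET, but the timing idea is language-neutral. Here is a minimal harness sketched in Java; loginOnce() is a hypothetical stand-in for whichever narrow action you decide to measure:

    import java.util.Arrays;

    // Minimal timing harness: run one narrow action many times and
    // report the numbers, so there is a baseline to beat later.
    // loginOnce() is a hypothetical stand-in for the real action.
    public class LoginTimer {
        static void loginOnce() {
            // ... perform the real login against a production-like setup
        }

        public static void main(String[] args) {
            final int runs = 50;
            long[] samples = new long[runs];
            for (int i = 0; i < runs; i++) {
                long start = System.nanoTime();
                loginOnce();
                samples[i] = System.nanoTime() - start;
            }
            Arrays.sort(samples);
            System.out.printf("median=%.1f ms, worst=%.1f ms%n",
                samples[runs / 2] / 1e6, samples[runs - 1] / 1e6);
        }
    }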
Instrument.
Now you know what action you want to improve, and you have a target time to beat, so you can instrument your code. This means injecting (manually or automatically) extra code that times each method call, or each line, and records times and/or memory usage right down the call stack.
There are lots of tools out there that can help you with this and automate some of it (Microsoft's CLR Profiler (free), Red Gate's ANTS (commercial), the higher editions of Visual Studio have stuff built in, and loads more). But you don't have to use automatic tools; it's perfectly acceptable to just use the Stopwatch class to time each block of your code. What you are looking for is a bottleneck. The likelihood is that you will find a high proportion of the overall time is spent in a very small bit of code.
Tune.
Now you have some timing data, you can start tuning.
There are two approaches to consider here. First, take an overall perspective. Consider whether you need to redesign the whole call stack. Are you repeating something unnecessarily? Or are you doing something you don't need to do at all?
Second, now that you have an idea of where your bottleneck is, you can try to figure out ways to improve that bit of code. I can't offer much advice here, because it depends on what your bottleneck is, but look to optimise it. Perhaps you need to cache data so you don't have to loop over it twice. Or batch up SQL calls so you can do just one. Or tighten your query filters so you return less data.
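As one concrete instance of "batch up SQL calls": the question's stack is ADO.NET, but the same idea expressed with JDBC looks like the sketch below (the table and column names are made up):

    import java.sql.*;

    // Batching: send many statements in one round trip instead of one
    // network hop per row. Table/column names are made up.
    public class BatchInsert {
        static void insertAll(Connection conn, int[] ids) throws SQLException {
            String sql = "INSERT INTO audit_log (user_id) VALUES (?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int id : ids) {
                    ps.setInt(1, id);
                    ps.addBatch();      // queue locally
                }
                ps.executeBatch();      // one round trip to the server
            }
        }
    }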
Re-profile.
This is the most important step that people often miss out. Once you have tuned your code, you absolutely must re-profile it in the same environment that you ran your initial profiling in. It is very common to make minor tweaks that you think might improve performance and actually end up degrading it because of some unknown way that the CLR handles something. This is much more common in managed languages because you often don't know exactly what is going on under the covers.
Now just repeat as necessary.
If you are likely to be performance tuning often I find it good to have a whole batch of automated performance tests that I can run that check the performance and throughput of various different activities. This way I can run these with every release and record performance changes each release. It also means that I can check that after a performance tuning session I know I haven't made the performance of some other area any worse.
When you are profiling, don't always just think about the time to run a single action. Also consider profiling under load, with lots of users logged in. Sometimes apps perform great when there's just one user connected, but when they hit a certain number of users suddenly the whole thing grinds to a halt - perhaps because they are suddenly spending more time context switching or swapping memory in and out to disk. If it's throughput you want to improve, you need to figure out what is causing the limit on throughput.
Finally, check out this huge MSDN article on Improving .NET Application Performance and Scalability. Specifically, you might want to look at chapters 6 and 17.
I think the best we can do from here is give you some pointers:
query less data from the sql server (caching, appropriate query filters; see the cache sketch after this list)
write better queries (indexing, joins, paging, etc)
minimise any inappropriate blockages such as locks between different requests
make sure session-state hasn't exploded in size
use bigger metal / more metal
use appropriate looping code etc
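To make the caching pointer from the list above concrete: a crude in-memory cache with a time-to-live is often enough to stop re-querying data that rarely changes. This is a minimal sketch under assumed requirements (no eviction, no size cap):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    // Crude TTL cache: avoids re-running an expensive query for data
    // that rarely changes. No eviction or size cap; illustration only.
    public class TtlCache<V> {
        private record Entry<V>(V value, long expiresAt) {}

        private final Map<String, Entry<V>> map = new ConcurrentHashMap<>();
        private final long ttlMillis;

        public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

        public V get(String key, Supplier<V> loader) {
            Entry<V> e = map.get(key);
            long now = System.currentTimeMillis();
            if (e == null || e.expiresAt < now) {
                e = new Entry<>(loader.get(), now + ttlMillis);
                map.put(key, e);
            }
            return e.value;
        }
    }

Usage would be something like cache.get("top-sellers", () -> runExpensiveQuery()), where runExpensiveQuery() is whatever call you're trying to avoid repeating.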
But to stress; from here anything is guesswork. You need to profile to find the general area for the suckage, and then profile more to isolate the specific area(s); but start by looking at:
sql trace between web-server and sql-server
network trace between web-server and client (both directions)
cache / state servers if appropriate
CPU / memory utilisation on the web-server
First of all, you have to find your bottlenecks and then try to improve those. That lets you focus your effort exactly where you have a serious problem.
In addition, you need to improve your connection to the DB. For example, use a lazy singleton pattern for the connection, and create batch requests instead of single requests. That helps you cut down on DB connections.
Check your cache and use suitable loop structures.
Another thing is to use appropriate types; for example, if you need an int, don't create a long, and so on.
At the end, you can use a profiler (especially in SQL) and check whether your queries are implemented as well as possible.
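For what it's worth, the "lazy singleton" idea from that answer might look like the sketch below (Java, using the initialization-on-demand holder idiom; the JDBC URL and credentials are placeholders, and in practice a connection pool is usually the better tool):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Lazy singleton via the initialization-on-demand holder idiom:
    // nothing is created until getConnection() is first called.
    // The JDBC URL and credentials are placeholders.
    public final class Db {
        private Db() {}

        private static class Holder {
            static final Connection INSTANCE = create();

            static Connection create() {
                try {
                    return DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=app", "user", "pass");
                } catch (SQLException e) {
                    throw new ExceptionInInitializerError(e);
                }
            }
        }

        public static Connection getConnection() {
            return Holder.INSTANCE;
        }
    }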

Issue tracking applications that subdivide issues into (sub)tasks?

I am looking for an issue tracking application which has two levels in its task hierarchy. This is because I find myself very often creating informal "TODO" lists within my issues. It seems to me that a FEATURE is usually bigger than a TASK - one feature usually requires several things to be done - e.g. "check if this will affect efficiency", "add the control in the GUI", "implement new extension to the core engine", "update documentation". Without stating all these sub-tasks, I find it impossible to estimate the time needed and the real complexity of the complete task.
I know I could create several issues, but it is often not feasible because these sub-tasks:
are related to a single feature from the user's perspective,
can be tested only together, when everything is done,
have the same developer assigned-to,
should be displayed together at all times,
should have only two states: todo or done.
Do you know any (commercial or not) applications that allows this? I am not just interested in hierarchies of issues or issue linking, but I need something with full issues on one level and with smaller and quicker "todo" lists on another one.
I've used both Jira and TeamTrack in previous jobs, and they both have sub-tasks.
Some suggestions:
(*) FogBugz - fogbugz.com
"FogBugz allows you to create subcases to represent lower-level tasks."
(*) IssueTrak - issuetrak.com
Solid issue-tracking system that I can recommend.
(*) CounterSoft's Gemini - countersoft.com
Feature-rich, much like Jira. Looks very promising.
Look also for project management systems for developers - these systems handle "projects" with "tasks".
/Kristoffer :-)
you could also use mantis:
you can relate issues, including as sub-issues
sub-issues can block a parent issue from being set to fixed until all sub-issues are fixed
you could have feature/parent issues in a separate project
Yes, Jira and FogBugz have subtasks. I have, however, found no application where subtasks are something smaller and quicker than tasks - I still have to repeat all the fields of the main task.
I ended up using Project Kaiser and I am satisfied with it. It has quite nice hierarchical subtasks.
I tend to use ToDoList for my personal tasks list, since it is a very simple and focused tool.
I know it doesn't exactly fit what you're searching (one application that does everything), but I use it effectively in coordination with our enterprise-grade bug tracking system.

How to make a build (java) as "CM-independent" as possible? (CM=Configuration Manager)

I have been thinking of making one of the project builds I handle as "independent" of me (the CM) as I possibly can. By this I don't just mean automation via scripts/tools - although it definitely includes that. This is a project subject to much chaos, so "total" automation would not be realistic.
Here is what I'm aiming for:
Anybody should be able to do the build (with some automation and a bit of documentation/guidelines) - for instance - a newbie CM, or even a developer with no CM experience.
My first thoughts are to achieve this by:
Nailing the build request process (via build forms which capture ALL details required for the build, so that nothing falls through the cracks just because it's in someone's head)
Simplifying the build steps so that they can be captured in simple documentation as a sequence of commands - a trained monkey should be able to run the build (well... not hurling insults, but - you get the idea :-) )
Using the tools' features to the hilt (read: Ant, SVN) such that potential issues are caught well in advance, and also to help provide better alerts in case of failures/issues.
Having the freedom to fall ill or take those occasional holidays without the project manager getting panic attacks every time I mention a couple of days off. :-)
I'd be glad to have some thoughts and ideas to help me in this direction. Thanks all!
At Urbancode, we refer to this as the "Bob the Builder" anti-pattern. The good news is that Bob (you) wants to get out of the loop. When the build guy can't go on vacation or get sick without parts of the process grinding to a halt, there really is an unacceptable problem. If I were a betting man, I'd wager that as you simplify the process down to "trained monkey" levels, you'll wonder why you're spending your time on this rote stuff when you're smart and could actually be adding value somewhere.
The symptoms of "Bob the Builder" syndrome in our book:
All requests for builds, or builds of a certain type, go through an individual or small team.
Response to these build requests is annoyingly slow for developers. If the build team is at lunch, they wait hours.
Bob, or the team of Bobs, spend a significant percentage of their time doing rote tasks.
The Bobs going home for the day, going to lunch, going on vacation, or getting sick impede the ability of the team to get things done.
We tell our AnthillPro customers to push all of this kind of stuff into their automation. Having two build types that use different machines, different build numbers, etc shouldn't be a problem.
The first step is to dumb down the process. Drive as much complexity out as possible so that you can get down to the "trained monkey" process. Once you have something approaching that, replacing the monkey with a computer is pretty easy.
I'd give more specific advice, but I don't think you've told us where the complexity comes from, other than chaos. Sometimes in this situation you need to attack chaotic and bad practices. Are you doing builds that are "This baseline in source code and those two files and these three files?" That would be tricky and probably need a CMer in the loop. Find a way to forbid it. Replacing that with "Create a branch, and make specific changes to that branch" makes constructing the build doable by that monkey.
You should be able to argue for those changes as high risk. Even though you are good, you will have bad days and want to take human error out of the equation as much as possible. At the same time, if you're shooting for faster response to the developers and self-service (which presumably development and management want), some things will need to be made automatable / monkeyable.
Having better forms can be good in the interim, and using your tools well is always good, but I would attack the "trained monkey" problem pretty aggressively. Anything that can't be done by a trained monkey (or a computer) should be a candidate for leaving the process. Once you have it down to "trained monkey" status, get your build automation in place so neither you nor the developers need to be monkeys. That changes your role from "Bob the Builder" to "Bob the Build System Owner".
Simplifying the build steps so that they can be captured in simple documentation as a sequence of commands - a trained monkey should be able to run the build (well... not hurling insults, but - you get the idea :-) )
If that is possible, then it should be possible to run the build in one step via a script (be it an Ant, bash, Maven, or whatever script). That should be the goal, so that basically anybody can do the build.
The goal of developing a build process should be this:
Start with an empty directory anywhere (tabula rasa, if you will)
Make sure a very small set of basic tools is installed (for me that's usually Java + Maven + SVN command line client)
Check out a single directory from your SVN/CVS/...
Start a single command (and that means something that doesn't have 25 parameters)
Wait (possibly quite a while)
Have your complete build
If you can't do that, then your build process is still not good enough.
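To make that concrete, the "single command" can itself be a tiny checked-in driver. Below is a hypothetical sketch in Java - the repository URL and commands are placeholders, and a shell or Ant script would do the same job:

    import java.io.IOException;

    // Hypothetical one-command build driver: check out, then build.
    // The repository URL and build commands are placeholders.
    public class Build {
        static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0)
                throw new RuntimeException("step failed: " + String.join(" ", cmd));
        }

        public static void main(String[] args) throws Exception {
            run("svn", "checkout", "https://example.com/svn/project/trunk", "work");
            run("mvn", "-f", "work/pom.xml", "clean", "package");
        }
    }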
If you think that you can't achieve it, then describe in detail which actions you would need to perform, in addition to that list, that are not possible for a Turing-complete machine to do.
Usually there isn't such a point; it's only the missing tools/know-how/motivation. I personally found that it's easier to do this than to describe why it can't be done.
Good luck.
