I am looking for an issue-tracking application that has two levels in its task hierarchy, because I very often find myself creating informal "TODO" lists within my issues. It seems to me that a FEATURE is usually bigger than a TASK - one feature usually requires several things to be done, e.g. "check whether this will affect efficiency", "add the control in the GUI", "implement the new extension to the core engine", "update documentation". Without stating all these sub-tasks, I find it impossible to estimate the time needed and the real complexity of the complete task.
I know I could create several issues, but it is often not feasible because these sub-tasks:
are related to a single feature from the user's perspective,
can be tested only together, when everything is done,
are assigned to the same developer,
should be displayed together at all times,
should have only two states: todo or done.
Do you know of any applications (commercial or not) that allow this? I am not interested merely in hierarchies of issues or issue linking; I need something with full issues on one level and with smaller, quicker "todo" lists on another.
I've used both Jira and TeamTrack in previous jobs, and they both have sub-tasks.
Some suggestions:
(*) FogBugz - fogbugz.com
"FogBugz allows you to create subcases to represent lower-level tasks."
(*) IssueTrak - issuetrak.com
Solid issue-tracking system that I can recommend.
(*) CounterSoft's Gemini - countersoft.com
Feature-rich, much like Jira. Looks very promising.
Also look at project management systems for developers - these systems handle "projects" with "tasks".
/Kristoffer :-)
you could also use mantis:
you can relate issues, including as sub-issues
sub-issues can block a parent issue from being set to fixed until all sub-issues are fixed
you could keep feature/parent issues in a separate project
Yes, Jira and FogBugz have subtasks. However, I have found no application where subtasks are something smaller and quicker than tasks - I still have to fill in all the fields of the main task.
I ended up using Project Kaiser and I am satisfied with it. It has quite nice hierarchical subtasks.
I tend to use ToDoList for my personal tasks list, since it is a very simple and focused tool.
I know it doesn't exactly fit what you're searching for (one application that does everything), but I use it effectively alongside our enterprise-grade bug-tracking system.
Hi, I am building a small app to get used to Meteor (and Mongo). Something that is bothering me is the data-modelling aspect - specifically, what is the best way to model a many-to-many relationship? I have read in the Mongo docs that a doc should not be embedded in another doc if you expect it to grow while the original doc remains fairly static.
In my test app, students can register for courses. So from the Mongo perspective it makes sense to include the students as an embedded doc in the course, since each course will have a limited number of students, as opposed to the other way round where, over time, a student could theoretically join an unlimited number of courses.
Then there is the Meteor aspect. I have read that a lot of Meteor's features are aimed at separate collections - for example, DDP works at the document level, so any change in the students array would cause the entire course doc to be resent to every browser, and things like the {{#each}} Spacebars helper work with Mongo cursors but not with arrays, etc.
Has anyone dealt with a similar situation and could they explain what approach they took and any drawbacks they had to deal with etc? Thanks.
See this article: https://www.discovermeteor.com/blog/reactive-joins-in-meteor/
And test how good your possible solutions are with this https://kadira.io/
Better to use the guide:
http://guide.meteor.com/data-loading.html#publishing-relations
The Meteor team tames (or hides!) the JavaScript monster to an amazing extent. By using their conventions, you get a ton of much-used functionality "for free", out of the box - things that are usually re-invented over and over again: accounts, OAuth, live data across clients, a standard live-data protocol, etc.
But very soon... you need features that are not in the box. Wow... look at all the choices. Wait a minute - this is the same monster you were fighting before Meteor!
So use the official Meteor Guide. They recommend the wisest ways to extend functionality of your app when you make these choices.
Since they know how they have "hidden the monster", they know how to keep avoiding the monster when you extend.
If we have a defined hierarchy in an application - for example a 3-tier architecture - how do we restrict subsequent developers from violating it?
For example, in the case of MVP (not ASP.NET MVC), the presenter should always bind the model and the view. This makes it possible to write proper unit tests. However, we have had instances where people imported the model directly into the view and called its functions, violating the pattern, and so the test cases couldn't be written properly.
Is there a way we can restrict which classes are allowed to inherit from a given set of classes? I am looking at various possibilities, including adopting a different design pattern, but a new approach should be worth the code change involved.
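One language-level mechanism that goes part of the way (a minimal sketch with made-up class names - it only restricts inheritance to the defining assembly and does nothing about cross-layer calls) is to give the base classes only internal constructors:

// In the core assembly: only types compiled into this same assembly can
// derive from PresenterBase, because its only constructor is internal.
public abstract class PresenterBase
{
    internal PresenterBase() { }
}

// Same assembly - compiles fine.
public class OrderPresenter : PresenterBase { }

// Any other assembly - compile-time error, the constructor is inaccessible:
// public class RogueSubclass : PresenterBase { }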
I'm afraid this is not possible. We tried to achieve this with the help of attributes and we didn't succeed. You may want to refer to my past post on SO.
The best you can do is keep checking your assemblies with NDepend. NDepend shows you a dependency diagram of the assemblies in your project, so you can immediately spot violations and take action.
It's been almost 3 years since I posted this question. I must say that I have kept exploring this despite the brilliant answers here. Some of the lessons I've learnt so far:
More code smells come out by looking at the consumers (unit tests are the best place to look, if you have them).
The number of parameters in a constructor is a direct indication of the number of dependencies. Too many dependencies => the class is doing too much (see the sketch after this list).
The number of (public) methods in a class is another such indicator.
The setup of unit tests will almost always give this away.
Code deteriorates over time unless there is a focused effort to clear technical debt and refactor. This is true irrespective of the language.
Tools can help only to an extent. But a combination of tools and tests often gives enough hints about the various smells. It takes a bit of experience to catch them in a timely fashion, particularly to understand each smell's significance and impact.
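A tiny, made-up illustration of the constructor-parameter point above (all of the interface and class names are hypothetical):

// Hypothetical dependencies, declared only so the example compiles.
public interface IOrderRepository { }
public interface IPaymentGateway { }
public interface IEmailSender { }
public interface IAuditLog { }
public interface IPdfRenderer { }

// Smell: five constructor parameters = five dependencies; the class is probably doing too much.
public class OrderProcessor
{
    public OrderProcessor(IOrderRepository orders, IPaymentGateway payments,
                          IEmailSender email, IAuditLog audit, IPdfRenderer invoices) { }
}

// One possible split: smaller classes, each with one or two dependencies and one job.
public class OrderPlacer
{
    public OrderPlacer(IOrderRepository orders, IPaymentGateway payments) { }
}

public class OrderNotifier
{
    public OrderNotifier(IEmailSender email) { }
}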
You want to solve a people problem with software? Prepare for a world of pain!
The way to solve the problem is to make sure that you have ways of working with people so that you don't end up with those kinds of problems: pair programming, reviews, induction of people when they first come onto the project, etc.
Having said that, you can write tools that analyse the software and look for common problems. But people are pretty creative and can find all sorts of bizarre ways of doing things.
Just as soon as everything gets locked down according to your satisfaction, new requirements will arrive and you'll have to break through the side of it.
Enforcing such stringency at the programming level in .NET is almost impossible, considering a programmer can access all private members through reflection.
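For illustration (a minimal sketch; the class and field names are invented), reflection happily walks straight past access modifiers:

using System;
using System.Reflection;

public class Account
{
    private decimal _balance = 100m;
}

public static class Demo
{
    public static void Main()
    {
        var account = new Account();

        // Read and overwrite the private field despite the 'private' modifier.
        var field = typeof(Account).GetField("_balance",
            BindingFlags.NonPublic | BindingFlags.Instance);
        field.SetValue(account, 1000000m);

        Console.WriteLine(field.GetValue(account)); // prints 1000000
    }
}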
Do yourself a favour and schedule regular code reviews, provide education and implement proper training. And, as you said, it will quickly become evident when you can't write unit tests against it.
What about NetArchTest, which is inspired by ArchUnit?
Example:
// Classes in the presentation layer should not directly reference repositories
var result = Types.InCurrentDomain()
    .That()
    .ResideInNamespace("NetArchTest.SampleLibrary.Presentation")
    .ShouldNot()
    .HaveDependencyOn("NetArchTest.SampleLibrary.Data")
    .GetResult()
    .IsSuccessful;

// Classes that depend on System.Data and sit under "ArchTest" should reside in the data namespace
result = Types.InCurrentDomain()
    .That().HaveDependencyOn("System.Data")
    .And().ResideInNamespace("ArchTest")
    .Should().ResideInNamespace("NetArchTest.SampleLibrary.Data")
    .GetResult()
    .IsSuccessful;
"This project allows you create tests that enforce conventions for class design, naming and dependency in .Net code bases. These can be used with any unit test framework and incorporated into a build pipeline. "
The other day I stumbled onto a rather old Usenet post by Linus Torvalds. It is the infamous "You are full of bull****" post in which he defends his choice of plain C for Git over something more modern.
In particular, this post made me think about the enormous number of abstraction layers that accumulate one on top of the other where I work. Mine is a Windows .NET environment. I must say that I like C# and the .NET environment; it really makes most things easy.
Now, I come from a very different background made of Unix technologies like C and a plethora of scripting languages; to me, OOP is also just one programming paradigm, and not always the best one. I often struggle (in a working kind of way, of course!) with my colleagues (one in particular), because they appear to belong to the "any problem can be solved with an additional level of abstraction" church, while I'm more of the "keep it simple" school. I think there is a very different mental approach to problems here, one that maybe comes from exposure to different cultures.
As a very simple example, for the first project I did here I needed some configuration for an application. I wrote a ten-line class to load and parse a txt file, located in the program's root dir, containing colon-separated key/value pairs, one per row. It worked.
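Something along these lines (a reconstruction for illustration, not the original code - the file name and the parsing details are guesses):

using System;
using System.Collections.Generic;
using System.IO;

public static class SimpleConfig
{
    // Reads "key: value" pairs, one per line, from a file next to the executable.
    public static Dictionary<string, string> Load(string fileName = "app.conf")
    {
        var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName);
        var settings = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split(new[] { ':' }, 2);
            if (parts.Length == 2)
                settings[parts[0].Trim()] = parts[1].Trim();
        }
        return settings;
    }
}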
In the end, to standardize the approach to the configuration problem, we now have a library that has to be deployed on every machine running each configured program; it calls a service that, at startup, loads an XML file containing references to other XML files, one per application, which hold the configurations themselves.
Now, it is extensible and made up of fancy reusable abstractions, providers and all, but I still think that, if one day we really do happen to reuse part of it, the time taken to build it would have been enough to write the needed code from scratch, or to copy/paste the old code and modify it.
What are your thoughts on this? Can you point me to some interesting references dealing with the problem?
Thanks
Abstraction makes it easier to construct software and to understand how it is put together, but it makes it harder to fully understand certain issues around performance and security, because the abstraction layers introduce their own kinds of complexity.
Torvalds' position is not absurd, but he is an extremist.
Simple answer: programming languages provide data structures and ways to combine them. Use these directly at first; do not abstract. If you find you have representation invariants to maintain that are at high risk of being broken - because there are many usage sites, possibly outside your control - then consider abstraction.
To implement this, first provide functions and convert the call sites to use them, without hiding the representation. Hide the data representation only when you're satisfied that your functional representation is sufficient. Make sure at that point to document the invariant being protected.
An "extreme programming" version of this: do not abstract until you have test cases that break your program. If you think the invariant can be breached, first write the test case that breaks it.
Here's a similar question: https://stackoverflow.com/questions/1992279/abstraction-in-todays-languages-excited-or-sad.
I agree with #Steve Emmerson - 'Coders at Work' would give you some excellent perspective on this issue.
I have been tasked with automating some of the paper forms in HR. This might eventually turn into "automate all forms", so I want to approach it in a way that will be best for the long term and provide a good framework as the project grows.
The first things that come to mind were:
-InfoPath/SharePoint (we don't currently use SharePoint, and it wouldn't be an option for the next two years)
-Workflow Foundation (I've looked into this and it does not seem very attractive or appropriate)
Options I'm considering at this point:
-Custom ASP.NET (VB.NET) & SQL Server, which is what my team mostly writes its apps with.
-Leverage InfoPath for creating the forms electronically. I'm wondering if there is a good approach to integrating this with a custom-built ASP.NET app.
-Considering building the app as an MVC web app.
My questions are these:
-Are there other options I might want to consider?
-Are there any starter kits or VB.NET-based open-source projects out there that could serve as a starting point or a good reference? Here I'm mostly concerned with the workflow processing.
-Any comments from those who have gone down this path?
This is going to sound really dumb, but the biggest lesson from my many years of helping companies automate paper-form-based processes is to understand the process first. You will most likely find that no single person understands the whole thing. You will need to role-play the many paths through the process to get your head around it. And once you present your findings, everyone will be shocked, because they had no idea it was that complex. Use that as an opportunity to streamline.
Automating a broken process only makes it screw up faster and tell a lot of people.
As far as tools go, my experience dates me, but try to go with something that has these properties:
EASY to change. You WILL be changing it. So don't hard-code anything.
Revision control if possible - changes to a process may or may not affect documents already en route?
Visual workflow editing. Everyone wants this but they'll all ask you to drive it. Still, nice tools.
Not sure if this helps or not - but 80% of success in automating processes is not technology.
This is slightly off topic, but related - defect-tracking systems generally have workflow engines/state. (In fact, I think Joel or some other Fog Creek employee posted something about using FogBugz for managing their initial emails and resume process.)
I second the other advice about modeling the workflow before making any coding or technology choices. You will also want this to be flexible.
as n8owl reminded us, automating a mess yields an automated mess - which is not an improvement. Many paper-form systems have evolved over decades and can be quite redundant and unruly. Some may view "messing with the forms" as a violation of their personal fiefdoms, so watch your back ;-)
model the workflow in terms of which forms are used, by whom, in what roles, and for what purposes; this documents the current process as a baseline. Get estimates of how long each step takes, both in man-hours and in calendar time
understand the workflow in terms of the information gathered, generated, and transmitted
consolidate the information on the forms into a new set of forms for minimal workflow
be prepared to be told "This is the way we've always done it and we're not going to change", and to gently (a) validate their feelings, (b) explain how less work is more efficient, and (c) show concrete benefits [vs. the baseline from step 1]
soft-code when possible; use processing rules when possible (see the sketch below); web services and HTML forms (esp. w/ jQuery) will go a long way if you have an intranet
beware of canned packages (including SharePoint) unless you are absolutely certain they encompass your organization's current and future needs
good luck!
--S
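To make the "soft-code / processing rules" advice above concrete, here is a deliberately tiny, hypothetical sketch - every form type, state and threshold is invented, and in practice the rules would live in a database table or config file rather than in code:

using System;
using System.Collections.Generic;
using System.Linq;

// A routing rule: when a form of this type is in this state and the condition
// holds, move it to the next state. Because rules are data, they can change
// without recompiling the application.
public class RoutingRule
{
    public string FormType;
    public string FromState;
    public Func<decimal, bool> Condition;
    public string ToState;
}

public static class FormRouter
{
    static readonly List<RoutingRule> Rules = new List<RoutingRule>
    {
        new RoutingRule { FormType = "ExpenseClaim", FromState = "Submitted",
                          Condition = a => a <= 500m, ToState = "ManagerApproval" },
        new RoutingRule { FormType = "ExpenseClaim", FromState = "Submitted",
                          Condition = a => a > 500m,  ToState = "DirectorApproval" },
    };

    public static string NextState(string formType, string currentState, decimal amount)
    {
        return Rules.First(r => r.FormType == formType
                             && r.FromState == currentState
                             && r.Condition(amount)).ToState;
    }
}

// e.g. FormRouter.NextState("ExpenseClaim", "Submitted", 1200m) returns "DirectorApproval"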
I detect here a general tone of caution with regard to a workflow-based approach, and I must agree. Be aware of the caveats of most workflow technologies, which sacrifice usability for flexibility.
I was using a CASE tool called Magic for a system I'm developing. I had never used this kind of tool before, and at first sight I liked it; a month later I had generated a lot of the application, I felt very productive and... I would say... satisfied.
In some way I felt uncomfortable, because there is no code and none of the things I was used to, but on the other hand I could speed up my development. The fact is that eventually I returned to C#, because I find it more flexible: I can do unit testing, use CVS, I have access to more resources, and basically I have "all the control". I felt that this tool didn't give me confidence, and I thought that later in the project I would not be able to manage it because of its rigidly established rules of development. Also, a lot of things, like sending emails and using my own controls, had their complications; it seemed that at some point it was not going to be as easy as I initially thought and as the product initially claims. This reminds me of a very nice article called "No Silver Bullet".
This CASE tool has its advantages, but on the other hand it doesn't have many resources you can consult, and the license and certification are actually very expensive. For me, another disappointing thing is that, because of its simplistic approach to development, I felt scared: first because of my inexperience with this kind of tool, and second because I thought that if I continued using it, it might turn into a complex monster that I could not manage later in the project.
I think it's good to use these kinds of solutions to speed things up, but I wonder: why aren't these programs as popular as VS.NET, J2EE, Ruby, Python, etc., if they claim to enhance productivity more than the tools I've mentioned?
We use a CASE tool at my current company for code generation and we are trying to move away from it.
The benefits that it brings - a graphical representation of the code, making components 'easier' to pick up for new developers - are outweighed by the disadvantages, in my opinion.
Those main disadvantages are:
We cannot do automatic merges, making it close to impossible for parallel development on one component.
Developers become dependent on the tool and 'forget' how to hand-code.
Just a couple of questions for you:
How much productivity do you gain compared to the control that you lose?
How testable and reliable is the code you create?
How well can you implement a new pattern into your design?
I can't imagine that there is a CASE tool out there where I could write a test first and then use the tool to generate the code I need. I'd rather stick with ReSharper, which can easily do my mundane tasks while letting me retain full control of my code.
The project I'm on originally went w/ the Oracle Development Suite to put together a web application.
Over time (5+ years), customer requirements became more complex than originally anticipated, and the screens were not easily maintainable. So, the team informally decided to start doing custom (hand coded) screens in web PL/SQL, instead of generating them using the Oracle Development Suite CASE tools (Oracle Designer).
The Oracle Report Builder component of the Development Suite is still being used by the team, as it seems to "get the job done" in a timely fashion. In general, the developers using the Report Builder tool are not very comfortable coding.
In this case, it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background.
Unfortunately, the Magic tool doesn't generate code, and it can't implement a design pattern either. I don't have control over the code because, as I stated before, there is no code to modify. The bottom line is that it can speed up productivity in some ways, but it makes it impossible to use CVS or patterns, and I can't control all the details.
I agree with gary when he says "it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background", but I also can't agree more with Klelky:
Those main disadvantages are:
1. We cannot do automatic merges, making it close to impossible for parallel development on one component.
2. Developers become dependent on the tool and 'forget' how to hand-code.
Thanks