Does a functional spec help or hinder your expectations? Are programmers who practice the waterfall methodology open to functional specs? I'm a web designer/developer working with a team of five programmers, and I'd like to write a functional spec to explain what we need, so that when I begin my work I know we are all working towards the same goal.
I won't start any freelance project until I've got a design spec and functional spec written up and signed off. There's too much room for rogue clients to nickel and dime you to death if you don't have it. The functional spec allows you to stay on target/focused and gives you a natural check list to work to.
If there's no functional spec then you get all the "what ifs" starting to creep in and developers thinking - you know, this would be useful, and it'll only take me an hour. Sure, an hour to code the prototype and get it basically working - plus a day to design all the tests and make sure all test cases are covered, then another couple of days to iron out all the bugs, then time to write the documentation. There's far too much room for what seems like a trivial addition to be inserted when there's no spec. You've no doubt heard of the infamous "scope creep". There's also far too much room for clients to say "that's not what I wanted..." when you deliver it and try to wriggle out of paying you.
If you've got the design spec and the functional spec written up ahead of the development and both you and the client have signed off that your understanding of not only the basic details but all the nuances of the language used is one and the same - only then can the real work begin.
There are a couple of adages out there; the first is quite true, while the other is a common misconception:
Software development is only 15% about the code, the rest is resource/people management.
It takes 20% of the time to complete the first 80% of the project and the remaining 80% of the time to complete the last 20%.
The misconception is that a working prototype is 80% of the way there - don't be fooled, it is not. So it's easy for a client to say "what's taking so long, I thought you were almost done!" and then quibble that they're paying too much for something that should've been finished months ago. Some of the design methodologies out there really lend themselves well to this popular misconception. The waterfall design methodology is one of them if it's not used correctly.
My view is make sure your understanding is the same, both sign off. Set milestones and make the client very aware at the outset that prototypes are a long way from the completion of the project and set expectations right from the outset as to what those milestones are and when the client can expect to see them delivered.
For project managers of development teams documentation and expectations are everything. You can't live without it, it's the only form of recourse you have against "That's not what I said" or "That's not what I meant" and ergo "I'm not paying you".
I learned from a lot of mistakes made by far more qualified developers than me and saw what it did to them. The two most important documents for your project are the design spec and the functional spec. Don't leave home without them or it can [or most likely will] come back and bite you in the ass.
Addendum re: User Stories:
An additional note about user stories is that they're great for what they are. They should be included in your functional specification but shouldn't be your functional specification.
A user story only describes a single task. It is very lightweight and doesn't contain excessive detail. As the common recommendations go, it should fit on a 3x5 card...if you as a project manager handed me a 3x5 card and told me to write a piece of software based on what I read, no doubt it would be handed to the user at the end and they'd tell the project manager that's not what they wanted.
You need a far greater level of detail in a functional spec. It shouldn't be limited to basic workflows. A functional spec is a whole bunch of user stories along with notes on interpretation of those user stories, improvements that can be made to them, common tasks that can be combined to improve efficiency. The list goes on.
So user stories are a good beginning, but they're not a replacement for a functional spec, they're bullet points in a functional spec.
I work with mostly the Waterfall model, and solely with functional specs. When working on my own (where I can set my own model and program any way I want) I start by writing up functional specs and then implementing them. It gives me a much better idea of the size and scope of the work, helps me estimate the time involved, and helps ensure that I don't miss anything.
Also, you can pass this document to:
Users so that they can make their requirements clear
Developers to create the functionality
Testers to make sure they are testing the right thing
Architects so that they can analyze the requirements
Using functional requirements over user stories is a matter of preference and the scope of the project. If you have a small user base, then you may be able to get away with user stories (which outline various event sequences the user might do), but for larger projects, you should use functional requirements as they have more detail and lead to fewer misunderstandings. Think of them as a means of communication with all people involved in the project.
Frankly, the Functional Specifications should already be part of your Big-M (Waterfall) methodology. Your functional specification is WHAT you are going to build; not necessarily how you are going to build it (which would be your detailed design/specification and the next step in the waterfall).
If you haven't written one yet, stop what you are doing and write one. You are just wasting time if you do otherwise. You can start here with Joel's article.
It took me more than 10 years to get it beat into my head to write a functional spec before doing any code. Now I will write one for anything taking more than a day to write. The level of detail, and level of assumptions should be as much as needed to clearly define what needs to be done and communicate it to others (or remind yourself), anything beyond that is a waste.
Others prefer User Stories ... which is fine too, as long as you do some kind of planning.
Another way to accomplish this is using user stories.
I find well-written functional specs very useful. A well organized functional specification can also help organize your tests (many-to-many mapping from individual requirements to test cases).
<p style="tongue: in-cheek">They also prove useful for finger-pointing in larger organizations (The requirements were inaccurate! The implementation didn't follow the requirement! QA didn't properly test this requirement! etc.)</p>
I'll second Codeslave's reference to Painless Functional Specifications. It's a good series of articles on specifications. See also this Stack Overflow post for a discussion on what content to put into functional specs.
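As a rough, purely illustrative sketch of the requirements-to-test-cases mapping mentioned above (the requirement and test-case IDs are hypothetical), it can start as simply as a pair of dictionaries:

```python
# Hypothetical requirement and test-case IDs, for illustration only.
requirement_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-102", "TC-201"],  # TC-102 covers two requirements
    "REQ-003": [],                    # flagged below as untested
}

# Invert the mapping to see which requirements each test exercises.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

untested = [req for req, tests in requirement_to_tests.items() if not tests]
print(untested)  # ['REQ-003']
```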
I've done a few large projects, including one with some hundreds of person-years of total effort. As a project team gets larger, the number of informal communication channels goes up with a quadratic upper bound. Without a spec, this informal communication mechanism is the only way things can get communicated. With a spec, the communication channels approach a hub-and-spoke arrangement, making the growth more like a linear function of the project team size.
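To make the quadratic growth concrete: with n people there are n(n-1)/2 possible pairwise channels, so a team of 20 has 190 of them. A tiny illustration:

```python
def pairwise_channels(team_size: int) -> int:
    """Possible one-to-one communication channels in a team of the given size."""
    return team_size * (team_size - 1) // 2

for n in (3, 5, 10, 20):
    print(n, pairwise_channels(n))  # 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190
```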
A spec is really the only scalable way to get the team 'singing off the same hymn sheet'. There should be some review and negotiation about the spec, but ultimately someone has to take ownership of it to avoid the project becoming a free-for-all.
I think they're a lovely idea, and should be tried.
Just a few comments on some of the answers here...
First of all, I do believe that a good spec document is important for any moderately complex requirement (and definitely for highly complex ones). But make sure it is a good spec, i.e. don't just state the obvious (as one poster already mentioned), but also don't leave out those parts that may seem trivial to you (or even the developers), since you might have more knowledge of that part of the system than some of the others involved (e.g. testers or documenters), who will appreciate the otherwise "missing bits".
And if your spec is good, it will get read - in my experience (and I've written and read lots of specs over the years), it's the bad specs that get dumped and the good ones that get followed.
Concerning user stories (sometimes also called use cases): I think these are important to get an idea of the scenario, but they usually can't replace the details (e.g. screen mockups, how, where, and whether a feature is configurable, the data model, migration issues, etc.), so you'll probably need both for more complex requirements.
In my experience, functional specs have a fine line between not saying enough and saying too much.
If they don't say enough, then they leave areas open to misunderstanding.
If they say too much, they don't leave enough "wiggle room" to improve the solution.
And they should always be open to a process of revision.
It depends on the functional specification. I've had functional specifications where the writer knew the system inside and out, and wrote the specification as such, and I've had other writers write it with just what they expected to see as a user.
With the former, it's easier because I know exactly what I need to write and where I need to write it, but it limits how easily I can refactor the existing code, since estimates took into account just this feature, and not refactoring existing code that it touches.
With the latter, I have freedom to refactor (so long as end functionality is the same), but if it's a system I'm unfamiliar with, the time estimate is harder to make.
Overall, I love functional specifications -- I also like to remind people that they're written on paper that bends, and as such, should be flexible.
Whether you call them functional specs, business requirements, or user stories, they are all very beneficial to the development process.
The issue with them comes when they are chiseled in stone and used as a device to pass blame around when the system ultimately doesn't fit the users' real needs. I prefer to use the functionals or requirements as a starting point for an iterative process, not as the bible for exactly how the system will look when it is complete. Things change, and users typically don't have an understanding of what they want until they have something (a working prototype, possibly) in their hands that they can work with, instead of conceptualizing on a piece of paper how it will function in the real world. The most successful projects I've implemented were ones where the development team and the users were closely aligned and were able to rapidly turn around changes, instead of holding people to what they wanted on a piece of paper six months ago.
Of course this process wouldn't work if you were in a fixed-bid type of situation as one of the earlier answers pointed out.
One interesting substitute for a func spec or user stories that I have seen advocated is to write a user manual for the software first.
Even if this is only notional (i.e. if you do not intend to ship any manual - which you probably shouldn't as nobody will read it), it can be a useful and reasonably lightweight way to reach a common understanding of what the software will do and look like. And it forces you to do the design up front.
I've found that even if you write functional specs, a lot of the detail is sometimes lost on the person you are trying to address. Now if a functional spec is written along with some kind of UI mockup, that makes a huge difference. UI mockups not only show the user what you are envisioning, they also trigger users into remembering things they wouldn't have thought of in the first place.
I have seen and written many specs, some were very good, most weren't. The main thing that they all had in common is that they were never followed. They all had cobwebs on them by the 3rd day of coding.
Write the spec if you want, the best time to do it is at the end of the project. It will be useless for developers to follow but it will at least be an accurate representation of what was done (I know- not really a spec). But don't expect the spec to get the code written for you. That is the real job. If you don't know what to write talk to your stakeholders and users and get some good user stories and then get on with it.
After investigating scrum and kanban a little bit, I finally read this answer and decided to start using kanban, picking something from scrum (note that I'm working mostly by myself, and I have read this question and its answers).
Now, my question is: which tool would be best to get started?
whiteboard and postit
agilezen.com
JIRA with greenhopper
a spreadsheet (possibly on Google Docs)
brightgreenprojects.com
Agilo
Target Process
something else (please specify)
Notes about each:
I would lean towards the whiteboard, but there are several drawbacks (e.g. cannot make automatic charts, time measurements, metrics, and sometimes I work from home - where I need it most - and it's not convenient to carry :-)
I don't want to remember another username/password (I promised to myself to signup only to OpenID-enabled services)
My employer has JIRA but my group doesn't use it - I might ask for an account (it shouldn't require another password) and maybe later involve the rest of the group. But I don't know if they are using greenhopper and if it's a big deal installing it.
I generally hate spreadsheets
maybe overkill?
I'd be happy to have a localhost instance, but it could be problematic to give access to the whole group (per network/firewalls) - not a deal-breaker but surely a concern
What I'd like to get from this?
being more productive
tracking how much time I spend in any given task, possibly discussing the issue with my supervisor
tracking what "blocks" me most often
immediately see where I am compared to my schedule
manage my long todo list in a better way (e.g. answering the "what should I do next?" question faster)
Do you have any suggestion?
Note on the scrumish tag: read Henrik Kniberg's PDF. He first introduces the definition of scrumish on page 9.
If I may, I think that you are on the wrong path. Anything other than 1 or 4 is overkill and pretty much useless for a non-distributed team. So for a team of one person...
Seriously, if you can avoid using a web-based application, just do it. First, unless you have already mastered Scrum/Kanban, you need to learn the process, not a tool. Don't let a tool dictate the process. Besides, most web-based tools are just too click-intensive, slower to update, and less transparent/visible than a spreadsheet and a physical board. They are really second-category options.
So, I'd go for a spreadsheet and physical board combo. If you need some charts (I'm still wondering what kind of charts/metrics you want to generate and what value they provide), a spreadsheet is the ideal tool (but honestly, you don't need any tool to draw a burndown). If you need to work from home, take the spreadsheet (or use Google Docs) and Post-its with you. Let's be objective: the impediments you mentioned are not actually real.
Last thing, if you had chosen the simplest thing that can possibly work, you would already be doing Scrum, Scrumban or whatever. So, instead of looking for a tool, my advice would be to just start doing it.
agilezen.com seems like the ideal solution for you. I have used it in the past solo for myself and it is convenient. I would not let a prejudice against non-OpenID sites get in the way of making a good choice.
pick the tool you already have, and start using it; don't let the absence of the "perfect tool" become an excuse not to start
EDIT: pick the simplest thing that can possibly work. In your case that would be whiteboard and postit notes. These have almost no setup overhead and will provide a constant visual reminder of what you're supposed to be doing.
And I suggest that you get used to making decisions on your own, as you're going to have to be your own Scrum Master ;-)
In the interests of diversity ;-) www.kanbantool.com has just launched too. It's open beta and seems at first glance even more "lightweight" than agilezen.
Target Process is good too.
We've been using JIRA with Greenhopper for a few months, in an effort to go agile. I use it for both Scrum for development, as well as for my personal kanban. The software is pretty flexible, and I'm going to keep with it. The story/subtask management is really handy, and it's fairly easy to use. One thing I like is that you can add stories/cards quickly, and can customize the data. This allowed me to add definition of done fields, work order numbers, etc.
In short, we're happy with it.
Bright Green have just launched a free version of their tool. It looks good - the free version is fully functional too: https://signup.brightgreenprojects.com/plan/Free
I've tried out another kanban product for personal use and am absolutely loving this one. Feels lightweight and simple but actually packs in a fair amount of functionality at the same time.
www.kanbanery.com (free for personal use)
A novel tool not mentioned before is getsmartQ (out of beta since Dec 2010)
Key characteristics: very intuitive, supports LWP, customizable forms, and task discussions
Pros
configurable workflow, mark overdue stories, mail notifications (e.g., for upcoming story deadlines), multiple story owners
Stories form completely customizable, per project workflow and story forms, different roles (people only creating stories)
very responsive GUI with partial updates
Apparently good support: I've asked a question and got a good answer within a few hours
Cons
no easy way to declare something as blocked or to distinguish type (feature/bug/..)
no API
no subtasks or story dependencies
In comparison to Agilezen it has a more sophisticated notification system, but apart from that still lacks important features (see cons above).
Note, getsmartQ is under active development and hence missing features mentioned above may have been implemented in the meantime.
There are (at least) two ways that technical debts make their way into projects. The first is by conscious decision. Some problems just are not worth tackling up front, so they are consciously allowed to accumulate as technical debt. The second is by ignorance. The people working on the project don't know or don't realize that they are incurring a technical debt. This question deals with the second. Are there technical debts that you let into your project that would have been trivial to keep out ("If I had only known...") but once they were embedded in the project, they became dramatically more costly?
Ignoring security problems entirely.
Cross-site scripting is one such example. It's considered harmless until you get alert('hello there!') popping up in the admin interface (if you're lucky - the script might just as easily silently copy all the data admins have access to, or serve malware to your customers).
And then you need 500 templates fixed yesterday. Hasty fixing will cause data to be double-escaped, and won't plug all vulnerabilities.
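As a minimal sketch of the escape-on-output idea (Python's standard library, with a hypothetical render_comment helper): escaping user input exactly once, at output time, is what avoids both the vulnerability and the double-escaping mess.

```python
from html import escape

def render_comment(raw_comment: str) -> str:
    # Escape user input once, at output time; the <script> tag renders inert.
    return "<p>" + escape(raw_comment) + "</p>"

safe = render_comment('<script>alert("hello there!")</script>')
print(safe)  # <p>&lt;script&gt;alert(&quot;hello there!&quot;)&lt;/script&gt;</p>

# The hasty-fix pitfall: escaping an already-escaped string double-escapes it.
print(escape(safe))  # &lt;p&gt;&amp;lt;script&amp;gt;... (garbled output)
```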
Storing dates in a database in local timezone. At some point, your application will be migrated to another timezone and you'll be in trouble. If you ever end up with mixed dates, you'll never be able to untangle them. Just store them in UTC.
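A minimal Python sketch of that convention (store UTC, convert only for display; the field and zone names are just examples):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Persist timestamps in UTC...
created_at = datetime.now(timezone.utc)
row = {"order_id": 42, "created_at": created_at.isoformat()}

# ...and convert to a local zone only when displaying to a user.
local_view = created_at.astimezone(ZoneInfo("Europe/Berlin"))
print(row["created_at"], "->", local_view.isoformat())
```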
One example of this is running a database in a mode that does not support Unicode. It works right up until the time that you are forced to support Unicode strings in your database. The migration path is non-trivial, depending on your database.
For example, SQL Server has a fixed maximum row length in bytes, so when you convert your columns to Unicode strings (NCHAR, NVARCHAR, etc.) there may not be enough room in the table to hold the data that you already have. Now, your migration code must make a decision about truncation or you must change your table layout entirely. Either way, it's much more work than just starting with all Unicode strings.
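A small sketch (not from the original answer) of "just starting with Unicode", using SQLAlchemy, whose Unicode type maps to NVARCHAR on SQL Server; the table and column names are made up:

```python
from sqlalchemy import Column, Integer, Unicode, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    # Unicode(100) becomes NVARCHAR(100) on SQL Server, so the schema
    # handles non-ASCII names from day one and never needs the painful
    # CHAR/VARCHAR -> NCHAR/NVARCHAR migration described above.
    name = Column(Unicode(100), nullable=False)

engine = create_engine("sqlite:///:memory:")  # any backend works for this demo
Base.metadata.create_all(engine)
```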
Unit Testing -- I think that failing to write tests as you go incurs a HUGE debt that is hard to make up. Although I am a fan of TDD, I don't really care if you write your tests before or after you implement the code... just as long as you keep your tests synced with your code.
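A tiny example of "keeping tests synced with the code" in pytest style; pricing.apply_discount is a hypothetical function standing in for whatever you just wrote:

```python
# test_pricing.py - written right alongside the (hypothetical) pricing module.
import pytest
from pricing import apply_discount

def test_discount_is_applied():
    assert apply_discount(100.0, percent=15) == 85.0

def test_negative_discount_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, percent=-5)
```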
Not starting a web project off using a JavaScript framework and hand-implementing stuff that was already available. Maintaining the hand-written JavaScript became enough of a pain that I ended up ripping it all out and redoing it with the framework.
I really struggle with this one, trying to balance YAGNI versus "I've been burned on this once too often"
My list of things I review on every application:
Localization:
Is Time Zone ever going to be important? If yes, persist date/times in UTC.
Are messages/text going to be localized? If yes, externalize messages (see the gettext sketch after this list).
Platform Independence? Pick an easily ported implementation.
Other areas where technical debt can be incurred include:
Black-Hole Data collection: Everything goes in, nothing ever goes out. (No long-term plan for archiving/deleting old data)
Failure to keep MVC or tiers cleanly separated over the application lifetime - for example, allowing too much logic to creep into the View, making adding an interface for mobile devices or web services much more costly.
I'm sure there will be others...
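On the "externalize messages" item above, a minimal sketch using Python's gettext; the "myapp" domain and locale directory are assumptions:

```python
import gettext

# Looks for compiled catalogs under ./locale/<lang>/LC_MESSAGES/myapp.mo;
# falls back to the original English strings if none are installed.
t = gettext.translation("myapp", localedir="locale",
                        languages=["de"], fallback=True)
_ = t.gettext

print(_("Welcome back"))  # translated if a German catalog exists
```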
Scalability - in particular data-driven business applications. I've seen more than once where all seems to run fine, but when the UAT environment finally gets stood up with database table sizes that approach production's, things start falling down right and left. It's easy for an online screen or batch program to run when the db is basically holding all rows in memory.
At a previous company they used and forced COM for stuff it wasn't needed for.
Another company with a C++ codebase didn't allow STL. (WTF?!)
Another project I was on made use of MFC just for the collections - No UI was involved. That was bad.
The ramifications of those decisions were, of course, not great. In two cases we had dependencies on pitiful MS technologies for no reason, and in the other people were forced to use worse implementations of generics and collections.
I classify these as "debt" since we had to make decisions and trade-offs later on in the projects due to the idiotic decisions up front. Most of the time we had to work around the shortcomings.
While not everyone may agree, I think that the largest contributor to technical debt is starting from the interface of any type of application and working down in the stack. I have come to learn that there is less chance of deviation from project goals by implementing a combination of TDD and DDD, because you can still develop and test core functionality with the interface becoming the icing.
Granted, it isn't a technical debt in itself, but I have found that top-down development is more of an open doorway that is inviting to decisions that are not well thought out - all for the sake of doing something the "looks cool". Also, I understand that not everyone will agree or feel the same way about it, so your mileage might vary on this one. Team dynamics and skills are a part of this equation, as well.
The cliche is that premature optimization is the root of all evil, and this certainly is true for micro-optimization. However, completely ignoring performance at a design level in an area where it clearly will matter can be a bad idea.
Not having a cohesive design up front tends to lead to it. You can overcome it to a degree if you take the time to refactor frequently, but most people keep bashing away at an overall design that does not match their changing requirements. This may be a more general answer than what you're looking for, but it does tend to be one of the more popular causes of technical debt.
When first approaching a project, is it best to step back and think through everything, or to just dive in, start coding, and polish at a later date? Essentially, do you design first or try to rapidly prototype?
I have been burned by both methods. Sometimes I try to think everything through, but when I actually get down to the nitty-gritty I encounter problems that I didn't take into consideration, and sometimes when I code first I end up with code that needs to be redone to fit in with a better overall design. A lot of my problems stem from inexperience, but any advice is welcome.
Go incrementally and iteratively.
Design a bit, implement a bit.
Starting with a design you can suffer from a tunnel effect where you cannot have any real feedback before you actually implement something.
Starting without design, you can take decisions you'll regret.
The ideal situation is to be able to implement a very skeletal end-to-end version of your system that can be tested, and demonstrated to the customer.
It is always safer to design first, but this does not mean prototyping does not work. The real problem with prototyping is resisting the urge to keep the code you already wrote instead of throwing it away when the time comes to do the design.
There is no silver bullet. It seems like design first is the preferred approach. But you will not be able to predict all complications that can arise while implementing your design. Some of them could potentially be show stoppers. Plus, if you're writing for a client, it's good to be able to show something just to make sure that you're on the same page.
At my workplace we do both - we do a rapid prototype, just to get feedback and get an idea of any potential problems. Then we do a formal design and formal implementation. In most cases we are able to salvage a lot of code from the prototyping stage. I like this approach, since we usually end up with clean, maintainable code.
See Gall's Law. The key is to iterate: design a little, implement a little, test a little, then repeat until you (or your customers) are satisfied. This is the essence of the new breed of "agile" methodologies.
It depends.
Prototyping is most useful when the requirements or a solution aren't necessarily clear. As an example, I am doing a data warehousing project in an environment (large commercial insurance) where financial reconciliation is a big deal. This project has involved a large prototyping exercise to get a system that will reconcile to the financials. As the business rules surrounding this were not well documented, the prototype was instrumental in exposing all of the corner cases.
In other cases, a design-first approach might be more appropriate. This is most applicable where requirements and a sensible solution architecture are reasonably obvious.
You must have some idea of a cohesive architecture before you start working. This is especially true of large scale systems.
Prototyping could be used for particular aspects of the design, e.g. presentation layer.
I think it depends on what kind of business requirements you have up-front. If they are (relatively) detailed and complete, then I'd design based on those requirements. If you have barely anything to work with in the beginning, then prototype out and show your customer what you got, to receive further requirement info.
You should develop using agile methodologies. Simply put, you design as you go. The team, together with the product owner, defines a list of topics to develop, orders them by importance, and splits the development into iterations. Each iteration has features to be developed, and each feature is designed at the start of the iteration.
See more here.
When first approaching a project, prototype. But don't prototype everything. Prototype one important thing (one "use case" if that means anything) and "turn the inner eye to follow its path" - keep an eye out for the practical problems you encounter in trying to get that one thing done.
Now that you have some idea what it takes to do an important thing, you can design from more than just first principles.
Of course, this assumes you're working in an environment where you can turn out prototypes at minimal cost to ongoing development efforts. But if you're working in such an environment, pepper your design discussions liberally with prototypes. With any luck you may get to keep some of them.
Note that agile methods are not an excuse to avoid designing; they just encourage testing of the design more frequently, and in smaller increments.
I like to sketch the design and break its elements down until reasonably sure that there are no obvious unknowns or risks; unknowns and risks are highlighted for 'spike' projects, with a time-box for determining feasibility and notes on possible alternatives if the preferred methods prove unworkable.
Once comfortable with the overall architecture, jump into the features bottom-up (or in priority order) to complete the design, write the initial tests, then implement.
EDIT: note that the question "design or prototype first" makes a bad assumption, i.e. that it is possible to prototype without doing any design, which of course is not the case (unless you are using the million-monkeys methodology).
Design first, unless you're willing to take the risk of throwing out all the work put into your prototype when you find it can't do what you need it to do. At a minimum, you should make some high level designs for your project that can help you make some decisions about how you're going to build your prototype so that you will have a minimum of wasted effort.
If I know what I want to build, I just go right to design.
If I'm building something for a client, then I prototype to ease out more specific requirements from the users.
Maybe not an answer but a suggestion from my experience.
In most cases I'd have been better off if I had started coding earlier. You can design until the cows come home, but if the cows are on the horizon when you start coding, you might find your careful design hard to implement in time.
We prototype a design, GUI, just to analyze a particular problem, proof of concept, etc. Sometimes we throw away the prototype, and sometimes it ends up in the production code. We use different languages, technologies, strategies, and styles to prototype.
What are the different situations you prototype usually and how do you prototype? Any good resource out there to master the craft?
One hot title is Effective Prototyping for Software Makers. The issue is that there are several schools of thought.
Rapid Prototyping. Use fancy tools; get something done soon.
Evolutionary Prototyping. Evolve from prototype to production.
Some of this is legacy thinking, based on an era where tools were primitive and projects had to be meticulously planned from the beginning. When I started in this industry, the "green-screen" character-mode applications were rocket science and very painful to mock up. Tools and formal techniques were essential to manage the costs and risks.
This thinking is trumped by some more recent thinking.
Powerful tools remove the need for complex prototypes. HTML mockups can be slapped together quickly. Is it still a prototype when you barely have to budget or plan for it?
[You can mock it up in MS-Word and save it as HTML. It's quicker for a Business Analyst to do it than to specify it and have a programmer do it.]
Also, powerful tools can reduce the cost of mistakes. If it only took a week to put something together -- production-ready -- what's the point of a formal prototype effort?
Agile techniques reduce the need to do quite so much detailed up-front planning. When you put something that works in the hands of users quickly, you don't have quite so much need to be sure every nuance is right before you start. It just has to be good enough to consider it progress.
What can happen is the following. [The hidden question is: is this still "prototyping" -- or is this just an Agile approach with powerful tools?]
Using tools like Django, you can put together the essential, core data structure and exercise it almost immediately. Use the default Django admin pages and you should be up and running as soon as you can articulate the data structures and write load utilities.
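For instance, a bare-bones sketch of that first step (the model and field names here are hypothetical); registering the models with the default admin gives you working CRUD screens immediately:

```python
# models.py
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)
    created = models.DateTimeField(auto_now_add=True)

class Order(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
    total = models.DecimalField(max_digits=10, decimal_places=2)

# admin.py
from django.contrib import admin
admin.site.register(Customer)
admin.site.register(Order)
```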
Then, add presentation pages wrapped around real, working data. Be sure you've got things right. Since you've only built data model and template-driven HTML pages, your investment is minimal. Explore.
Iterate until people start asking for smarter transactions than those available in the default admin pages. At this point, you're moving away from "discovery" and "elaboration" and into "construction". Did you do any prototyping? I suppose each HTML template you discarded was a kind of prototype. For that matter, so were the ones you kept.
The whole time, you can be working with more-or-less live, production users.
Personally, I believe a true prototype should not be much more than diagrams drawn on paper to demonstrate the flow of whatever it is you are trying to achieve. You can then use these documented flows to run through several scenarios to see if it works with whoever has requested the functionality.
Once the paper prototype has been modified to a point where it works then use it as a basis to start coding properly.
The benefits of this process are that you can't end up actually using the prototype code in production because there isn't any. Also, it is much easier to test it with business experts as there is not any code for them to understand.
Right now I just draw pictures. I would like to do more, but getting something to a point where the users would understand it any better than a picture would cost too much time.
I am interested in seeing some of these responses :)
I should mention that where I work it's just me and one other guy playing all the roles - project manager (collecting data, designing the spec and app), DBAs, coders, tool researcher/developer, et al. - that come with the job of making an app for a small company.
For webapps, start with a pure (x)HTML + CSS mock-up, and then use a framework that makes it easy to implement the functionality.
Template-based frameworks are very good for this, but we've had some good experiences with JSF + Facelets + Seam, too.
The main reason for doing prototypes is reducing risk. Thus, we do UI prototypes, which are really not very helpful unless they actually do something that a user can play around with. Just as important, we also do prototypes to either prove that something can work or figure out how something does work.
I start off making a prototype that makes the most interesting part work, then I throw it away and move on to a new, more interesting project...
*kills self*
How do you go about the requirements gathering phase? Does anyone have a good set of guidelines or tips to follow? What are some good questions to ask the stakeholders?
I am currently working on a new project and there are a lot of unknowns. I am in the process of coming up with a list of questions to ask the stakeholders. However, I can't help but feel that I am missing something or forgetting to ask a critical question.
You're almost certainly missing something. A lot of things, probably. Don't worry, it's ok. Even if you remembered everything and covered all the bases stakeholders aren't going to be able to give you very good, clear requirements without any point of reference. The best way to do this sort of thing is to get what you can from them now, then take that and give them something to react to. It can be a paper prototype, a mockup, version 0.1 of the software, whatever. Then they can start telling you what they really want.
See obligatory comic below...
In general, I try and get a feel for the business model my customer/client is trying to emulate with the application they want built. Are we building a glorified forms processor? Are we retrieving data from multiple sources in a single application to save time? Are we performing some kind of integration?
Once the general business model is established, I then move to the "musts" and "must nots" for the application, to dictate what data I can retrieve, who can perform what functions, etc.
Usually if you can get the customer to explain their model or workflow, you can move from there and find additional key questions.
The one question I always make sure to ask in some form or another is "What is the trickiest/most annoying thing you have to do when doing X?" Typically the answer to that reveals the craziest business/data rule you'll have to implement.
Hope this helps!
Steve Yegge's take is fun, but there is money to be made in working out what other people's requirements are, so I'd take his article with a pinch of salt.
Requirements gathering is incredibly tough because of the manner in which communication works. It's a four-step process that is lossy at each step.
I have an idea in my head
I transform this into words and pictures
You interpret the pictures and words
You paint an image in your own mind of what my original idea was like
And humans fail miserably at this with worrying frequency through their adorable imperfections.
Agile is right to promote iterative development. Getting early versions out to the client is important in identifying which features are most important (what ships in 0.1 - 0.5 ish), helps to keep you both on the right track in terms of how the application will work, and quickly identifies the hidden features that you would otherwise miss.
The two main problem scenarios are the two ends of the scales:
Not having a freaking clue about what you are doing - get some domain experts
Having too many requirements - the feature pit. Question and cull (prioritise ;) ) features, and use iterative development
Yegge does well in pointing out that domain experts are essential to produce good requirements because they know the business and have worked in it. They can help identify the core desire of the client and will help explain how their staff will use the system and what is important to the staff.
Alternatives and additions include trying to do the job yourself to get into the mindset or having a client staff member occasionally on-site, although the latter is unlikely to happen.
The feature pit is the other side, mostly full of failed government IT projects. Too much, too soon, not enough thought or application of realism (but what do you expect, when they have only about four years to make themselves feel important?). The aim here is to work out what the customer really wants.
As long as you work on getting the core components correct, efficient, and bug-free, clients usually remain tolerant of missing features that arrive in later shipments, as long as they eventually arrive. This is where iterative development really helps.
Remember to separate the client's ideas of what the program will be like and what they want the program to achieve.
Some clients can create confusion by communicating their requirements in the form of application features which may be poorly thought out or made redundant by much simpler functionality than they think they require. While I'm not advocating calling the client an idiot or not listening to them, I feel that it is worth forever asking why they want a particular feature, to get to its underlying purpose.
Remember that in either scenario it is imperative to root out the quickest path to fulfilling the customer's core need and to put yourself in a scenario where you are both profiting from the relationship.
Wow, where to start?
First, there is a set of knowledge someone should have to do analysis on some projects, but it really depends on what you are building for who. In other words, it makes a big difference if you are modifying an enterprise application for a Fortune 100 corporation, building an iPhone app, or adding functionality to a personal webpage.
Second, there are different kinds of requirements.
Objectives: What does the user want to accomplish?
Functional: What does the user need to do in order to reach their objective? (think steps to reach the objective/s)
Non-functional: What are the constraints your program needs to perform within? (think 10 vs 10k simultaneous users, growth, back-up, etc.)
Business rules: What dynamic constraints do you have to meet? (think calculations, definitions, legal concerns, etc.)
Third, the way to gather requirements most effectively, and then get feedback on them (which you will do, right?), is to use models. Use cases and user stories are a model of what the user needs to do. Process models are another version of what needs to happen. System diagrams are just another model of how different parts of the program(s) interact. Good data modeling will define business concepts and show you the inputs, outputs, and changes that happen within your program. Models (and there are more than I listed) are really the key to the concern you list. A few good models will capture the needs, and from models you can determine your requirements.
Fourth, get feedback. I know I mentioned this already, but you will not get everything right the first time, so get responses to what your customer wants.
As much as I appreciate requirements, and the models that drive them, users typically do not understand the ramifications of all their requests. Constant communication with chances for review and feedback will give users a better understanding of what you are delivering. Further, they will refine their understanding based on what they see. Unless you're working for the government, iterations and/or prototypes are helpful.
First of all, gather the requirements before you start coding. You can begin the design while you are gathering them, depending on your project life cycle, but you shouldn't ever start coding without them.
Requirements are a set of well written documents that protect both the client and yourself. Never forget that. If no requirement is present then it was not paid for (and thus it requires a formal change request), if it's present then it must be implemented and must work correctly.
Requirements must be testable. If a requirement cannot be tested then it isn't a requirement. That means something like, "The system "
Requirements must be concrete. That means stating "The system user interface shall be easy to use" is not a correct requirement.
In order to actually "gather" the requirements you need to first make sure you understand the business model. The client will tell you what they want in their own words; it is your job to understand it and interpret it in the right context.
Hold meetings with the client while you're developing the requirements. Describe them to the client in your own words and make sure you and the client have the same concept of the requirements.
Requirements require concise, testable examples, but keep track of every other thing that comes up in the meetings - diagrams, doubts - and try to maintain a record of every meeting.
If you can use an incremental life cycle, that will give you the ability to improve some badly gathered requirements.
You can never ask too many or "stupid" questions. The more questions you ask, the more answers you receive.
According to Steve Yegge that's the wrong question to ask. If you're gathering requirement it's already too late, your project is doomed.
High-level discussions about purpose, scope, limitations of operating environment, size, etc
Audition a single paragraph description of the system, hammer it out
Mock up UI
Formalize known requirements
Now iterate between 3 and 4 with more and more functional prototypes and more specs with more details. Write tests as you go. Do this until you have functional software and a complete, objective, testable requirements spec.
That's the dream. The reality is usually after a couple iterations everybody goes head-down and codes until there's a month left to test.
Gathering Business Requirements Are Bullshit - Steve Yegge
read the agile manifesto - working software is the only measurement for the success of a software project
get familiar with agile software practices - study Scrum, lean programming, XP, etc. - this will save you a tremendous amount of time, not only for the requirements gathering but also for the entire software development lifecycle
keep regular discussions with Customers and especially the future users and key-users
make sure you talk to the Persons understanding the problem domain - e.g. specialists in the field
Take small notes during the talks
After each conversation, write an official requirements list and present it for approval. Later on it will be difficult to argue against all the agreed documentation
make sure your Customers know approximately what the expenses in time and money are for implementing "nice to have" requirements
make sure you label the requirements as "must have" , "should have" and "nice to have" from the very beginning, ensure Customers understand the differences between those types also
integrate all documents into the latest and final requirements analysis (or the current one for the iteration or whatever agile programming cycle you are using ... )
remember that requirements do change over the software life cycle, so gathering is one thing but managing and implementing is another
KISS - keep it as simple as possible
study also the environment where the future system will reside - there are more and more technological constraints from legacy or surrounding systems, since companies prefer not to throw away the money they have invested over decades, even if in our modern minds 20-year-old code is garbage...
Like most stages of the software development process its iteration works best.
First find out who your users are -- the XYZ dept,
Then find out where they fit into the organisation -- part of Z division,
Then find out what they do in general terms -- manage cash
Then in specific terms -- collect cash from tills, and check for till fraud.
Then you can start talking to them.
Ask what problem they want you to solve -- you will get an answer like "write a bamboozling system using OCR with shark technologies".
Ignore that answer and ask some more questions to find out what the real problem is -- they can't read the till slips to reconcile the cash.
Agree a real solution with the users -- get a better ink ribbon supplier - or connect the electronic tills to the network and upload the logs to a central server.
Then agree in detail how they will measure the success of the project.
Then and only then propose and agree a detailed set of requirements.
I would suggest you read Roger Pressman's Software Engineering: A Practitioner's Approach.
Before you go talking to the stakeholders/users/anyone, be sure you will be able to put down the gathered information in a useful and lasting way.
Use a sound-recorder if it is OK with the other person and the information is bulky.
If you heard something important and you need some reasonable time to write it down, you have two choices: ask the other person to wait a second, or say goodbye to that precious information. You won't remember it right; ask any neuroscientist.
If you detect that a point needs deeper review, or that you need some document you just heard of, make sure you make a commitment with the other person to send that document or to schedule another meeting with a more specific purpose. Never say "I'll remember to ask for that xls file" because in most cases you won't.
Not too long after the meeting, summarize all your notes, recordings, and fresh thoughts. Just summarize it right. Create effective reminders for the commitments.
Again, just after the meeting, is the perfect time to understand why the gathering you just did was not as right as you thought at the end of the meeting. That's when you will be able to put down a lot of meaningful questions for another meeting.
I know the question was asked from the pre-meeting perspective, but please be aware that you can work on these matters before the meeting and end up with a much more useful, complete, and higher-quality gathering.
I've been using mind mapping (like a work breakdown structure) to help gather requirements and define the unknowns (the #1 project killer). Start at a high level and work your way down. You need to work with the sponsors, users and development team to ensure you get all the angles and don't miss anything. You can't be expected to know the entire scope of what they want without their involvement...you - as a project manager/BA - need to get them involved (most important part of the job).
There are some great ideas here already. Here are some requirements gathering principles that I always like to keep in mind:
Know the difference between the user and the customer.
The business owners that approve the shiny project are usually the customers. However, a devastating mistake is the tendency to confuse them with the user. The customer is usually the person that recognizes the need for your product, but the user is the person that will actually be using the solution (and will most likely complain later about a requirement your product did not meet).
Go to more than one person
Because we’re all human, and we tend to not remember every excruciating detail. You increase your likelihood of finding missed requirements as you talk to more people and cross-check.
Avoid specials
When a user asks for something very specific, be wary. Always question the biases and see if this will really make your product better.
Prototype
Don’t wait till launch to show what you have to the user. Do frequent prototypes (you can even call them beta versions) and get constant feedback throughout the development process. You’ll probably find more requirements as you do this.
I recently started using the concepts, standards, and templates defined by the International Institute of Business Analysis (IIBA).
They have a pretty good BOK (Body of Knowledge) that can be downloaded from their website. They also offer a certification.
Requirements engineering is a bit of an art; there are lots of different ways to go about it, and you really have to tailor it to your project and the stakeholders involved. A good place to start is with Software Requirements by Karl Wiegers:
http://www.amazon.com/Software-Requirements-Second-Pro-Best-Practices/dp/0735618798/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1234910330&sr=8-2
and a requirements engineering process which may consist of a number of steps e.g.:
Elicitation - for the basis for discussion with the business
Analysis and Description - a technical description for the purpose of the developers
Elaboration, Clarification, Verification and Negotiation - further refinement of the requirements
Also, there are a number of ways of documenting the requirements (use cases, prototypes, specifications, modelling languages). Each has its advantages and disadvantages. For example, prototypes are very good for eliciting ideas from the business and for discussion of ideas.
I generally find that writing a set of use cases and including wireframe prototypes works well to identify an initial set of requirements. From that point it's a continual process of working with technical people and business people to further clarify and elaborate on the requirements. Keeping track of what was initially agreed and tracking additional requirements are essential to avoid scope creep. Negotiation between the various parties plays a big part here also, as per the Broken Iron Triangle (http://www.ambysoft.com/essays/brokenTriangle.html).
IMO the most important first step is to set up a dictionary of domain-specific words. When your client says "order", what does he mean? Something he receives from his customers or something he sends to his suppliers? Or maybe both?
Find the keywords in the stakeholders' business, and let them explain those words until you comprehend their meaning in the process. Without that, you will have a hard time trying to understand the requirements.
I wrote a blog article about the approach I use:
http://pm4web.blogspot.com/2008/10/needs-analysis-for-business-websites.html
Basically: questions to ask your client before building their website.
I should add that this questionnaire sheet is only geared towards basic website builds - like a business web presence. It's a totally different story if you are talking about web-based software, although some of it is still relevant (e.g. questions relating to look and feel).
LM
I prefer to keep my requirements gathering process as simple, direct and thorough as possible. You can download a sample document that I use as a template for my projects at this blog posting: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html