HTTP method for small actions such as an (up)vote

The verbs are pretty straightforward for CRUD actions.
What would be the right HTTP verb for only performing an action, something
like an upvote?
Maybe this speaks more to data modeling? Is an upvote a resource or just an attribute? I'm unsure about that. Let's say it does modify the resource directly by calling #upvote on the model.
For example, if I upvote a question here on SO, what verb should ideally be used for that action? I am modifying the resource in a partial manner (PATCH?), but at the same time I don't want to specify the new value, since that could run into concurrency issues; the increment is best managed by the database. In other words, we want to ask the server to perform an incremental action on a resource. Is that covered by PATCH?
I've seen a similar question asked elsewhere, but that case pointed to the creation of a new resource, viewing the job request as an object to be created. Are we in the same situation here?
If the PATCH method really would be appropriate, what would it contain?

Maybe this speaks more to data modeling? Is an upvote a resource or just an attribute?
Modelling impacts Implementation
We are usually modelling something from the real world and our choice of representation will seriously affect the capabilities of the developed system. We could implement our vote in two ways: as an attribute on the thing being voted on or as an entity in its own right. The choice will affect how easily we can implement desired features.
Two possible implementations ...
1. Votes as entities
I would model this with a resource that represents the relationship between the voter and the thing being voted on. Why?
The vote has state:
what was being voted on
who voted
when they voted
whether it was an up vote or a down vote (you mentioned SO as an example, so I include that possibility here)
It is a resource in its own right with interesting behaviour around the votes:
maintain a correct count of the votes
prevent multiple up votes / down votes
It can be modelled easily with REST.
I can POST/PUT a new vote, DELETE a previous vote, and check my votes with a qualified GET.
The system can ensure that I only vote once - something which would not be easy to do if a simple counter was being maintained.
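To make this concrete, here is a minimal sketch of such an API in Python/Flask; the framework choice, URLs and field names are my own assumptions, purely for illustration:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # One vote per (user, question): the vote is a resource in its own right.
    votes = {}  # {(user_id, question_id): "up" or "down"}

    @app.route("/questions/<int:qid>/vote", methods=["PUT"])
    def cast_vote(qid):
        user = "alice"  # stand-in for the authenticated user
        direction = request.get_json()["direction"]  # "up" or "down"
        votes[(user, qid)] = direction  # idempotent: voting twice changes nothing
        return "", 204

    @app.route("/questions/<int:qid>/vote", methods=["DELETE"])
    def retract_vote(qid):
        votes.pop(("alice", qid), None)
        return "", 204

    @app.route("/questions/<int:qid>/votes", methods=["GET"])
    def vote_count(qid):
        ups = sum(1 for (u, q), v in votes.items() if q == qid and v == "up")
        downs = sum(1 for (u, q), v in votes.items() if q == qid and v == "down")
        return jsonify({"up": ups, "down": downs})

    if __name__ == "__main__":
        app.run()

Note how the server derives the count from the vote resources, so the "only vote once" rule falls out of the key structure rather than needing extra bookkeeping.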
2. Votes as an attribute
In this implementation, we model the vote as a counter. In this case we have to:
Get the entire state of the thing being voted on - maximising the data exchanged between client and server
Update the counter
Put back the updated state - oops, someone already updated the resource in the meantime!
The server now has no easy way to handle multiple votes from the same person without managing some state 'on the side'. We also have that 'lost update' problem.
Things quickly get complicated.
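To make the lost update concrete, here is the failing read-modify-write cycle sketched in plain Python (the payloads are hypothetical):

    # Two clients GET /questions/42 and receive the same representation.
    state_a = {"id": 42, "votes": 10}
    state_b = {"id": 42, "votes": 10}

    # Each increments its local copy...
    state_a["votes"] += 1  # 11
    state_b["votes"] += 1  # 11

    # ...and each PUTs the full representation back.
    # The second PUT silently overwrites the first:
    # the server ends up with 11 votes instead of 12.

Conditional requests (an ETag plus If-Match) can detect the conflict, but then every client has to handle the failure and retry; modelling the vote as its own resource sidesteps the problem entirely.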
Final advice
The decision on how you model something should be driven by what you need the system to do.
There is often no correct decision, just the best compromise between effort and value.
Choose a design which most easily implements the most common Use Cases. Common things should be quick and simple to do, uncommon things need only be possible.
Chris

Related

Is this a bad DynamoDB database schema?

After watching a few videos about DynamoDB and its best practices, I decided to give it a try; however, I can't help but feel that what I'm doing may be an anti-pattern. As I understand it, the best practice is to use as few tables as possible while taking advantage of GSIs to do some of the 'heavy' lifting. Unfortunately, I'm working with a use case that doesn't have strictly defined access patterns yet, since we're still in early development.
Some early access patterns that we may see are:
Retrieve the number of wins for a particular game: rock paper scissors, boxing, etc. [1 quick lookup]
Retrieve the amount of coins a user has. [1 quick lookup]
Retrieve all the items that someone has purchased (don't care about date). [Not sure?]
Possibly retrieve all the attributes associated with a user (rps wins, box wins, coins, etc). [I genuinely don't know.]
Additionally, there are operations that will need to make two updates at once. For example, if the user wins a particular game, they may receive "coins". Effectively, we'll need to add coins to the user's "coins" attribute and update their number of wins for that game.
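For reference, this is the kind of dual update I have in mind, sketched with boto3 (the table, key and attribute names are made up):

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("GameData")  # hypothetical table name

    # One atomic write: credit coins and bump the per-game win counter.
    # ADD updates numeric attributes server-side, so no read is needed first.
    table.update_item(
        Key={"PK": "USER#123", "SK": "PROFILE"},
        UpdateExpression="ADD coins :c, rps_wins :w",
        ExpressionAttributeValues={":c": 25, ":w": 1},
    )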
Do you think I should revisit this strategy? Additionally, we'll probably start creating 'logs' associated with various games and each individual play.
Designing a DynamoDB data model without fully understanding your application's access patterns is the anti-pattern.
Take the time to define your entities (Users, Games, Orders, etc.), their relationships to one another, and your application's key access patterns. This can be hard work when you are just getting started, but it's absolutely critical when working with DynamoDB. How else can we (or you, or anybody) evaluate whether or not you're using DDB correctly?
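For example, a first pass at a single-table layout for entities like these might look as follows (purely illustrative; the right keys depend on the access patterns you settle on):

    PK          SK              example attributes
    USER#123    PROFILE         displayName, coins
    USER#123    GAME#RPS        wins, losses
    USER#123    ORDER#20210701  item, price

A GetItem on (USER#123, GAME#RPS) serves the quick lookups, and a Query on PK = USER#123 returns everything about a user, purchases included.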
When I first worked with DDB, I approached the process in a way similar to the one you are describing. I was used to working with SQL databases, where I could define a few tables and rely on the magic of SQL to support new access patterns as my understanding of the application evolved. I quickly realized this was not going to work if I wanted to use DynamoDB!
Instead, I started from the front-end of my application. I sketched out the different pages in my app and nailed down its most important concepts. Granted, I may not have covered every access pattern, but the exercise certainly identified the minimal set I'd need for a usable app.
If you need to rapidly prototype your application to get a better understanding of your access patterns, consider using the skills you and your team already have. If you already understand data modeling with SQL databases, go with that for now. You can always revisit DynamoDB once you have a better understanding of your access patterns and have determined that your application can benefit from a NoSQL database.

Ideas to keep user stories small

There seem to be competing best practices for user stories that I have not been able to reconcile. Mainly:
Keep user stories small.
User stories should deliver customer value, so engineering-focused items such as refactoring should not be separate user stories and should be part of ongoing work.
I feel like it's hard to achieve both of these at the same time.
For example, sometimes you need to refactor things. Some code is complex and just takes time. If the refactoring is done as part of the feature's user story, then that user story gets bigger. But the refactor should not be a user story of its own, since it doesn't deliver customer value. You could argue that we shouldn't have let that code get checked in in the first place, but requirements change and therefore assumptions change, so I don't think that's a realistic expectation.
Another example is starting a new project: we need to set up the repository, the project, and the CI/CD pipelines. All of these are infrastructure work items and do not deliver any direct customer value. I guess in this case we could use "the engineer" as the customer, but there is some debate out there about whether that's good practice.
Now, I could bend the rules and have these as user stories. But I am curious: for people following the Scrum rules strictly, are there tips and tricks to achieve both of these requirements?
Scrum does not prescribe any format for the Product Backlog items. So your Product Backlog can contain user stories but also any other kind of item.
As you say, user stories should deliver value, typically meeting the INVEST criteria, but your Product Backlog can also contain items that make functionality possible, such as upgrading libraries, implementing a logging system, or whatever else helps the team to potentially release the product. Bear in mind that your Product Backlog will contain bugs too, so it's not just about new functionality. So don't fool yourself by making up users who don't exist.
That being said, a big refactor is usually a smell of a problem not tackled at the proper time (generating high technical debt, a weak DoD and so on). That's the reason many authors say that you should refactor at the same time you are developing new functionality (your code should always be in good shape).
Finally, if you are in a scenario where you need a big refactor, my advice is to decompose the refactor into small pieces and create a good harness of tests so you can refactor with confidence.
I can think of several ways you can cut stories. However, keep in mind that these are ideas that can work but can also make no sense depending on the work. I only want to share my experience.
Think about the three basic areas - user interface, data calculation, data retrieval - and try to cut the story along those lines.
If the work involves repetition, create a story for each iteration.
Try to make the first implementation as basic as possible, then create more stories for upgrades and fine-tuning.
Even if a story seems straightforward, try to cut it; it is amazing how much better a team can discuss and refine smaller items, and it forces you to think about the details, which really can make a difference.

Calculating ratings/points in a community driven site

To learn ASP.NET MVC, I am thinking of creating a community forum like SO where people can rate posts, users, etc., and users can thereby gain points. I just can't figure out whether the points should be added to the user profile whenever an action happens (post rated up/down, user created a new post, etc.) or whether they should be calculated from the different activities the user has done.
I have a few pros and cons for both ways of doing it:
Add rating:
Pro:
Easier to implement, and much faster and less resource intensive.
Con:
If the point value of the different activities changes, you can't do anything about points already awarded.
No way of showing a history on how you have gotten your points.
Calculating rating:
Pro:
Much easier to have a point-history for both the user and people viewing the account.
Possibility to change the amount of points for a given activity.
Con:
A little more difficult to implement.
More resource intensive (can be mitigated by caching the data, or by creating a job which calculates the points).
I think you've pretty much thought of everything. I can just offer some engineering tips. All things being equal, always start off with what's easier to implement.
Now, as you say, there are some cons, so the options aren't equal; they don't offer the same functionality. So: can you live without the history? If not, implement the calculating approach first. Your model will be tight and well defined, which is always nice.
If you determine later on that this is too CPU-intensive, only then do you go about fixing it with a cache or a job (good ideas, both, by the way). 90% of the time, unless you actually measure, you'll be laboring on optimizations that are not necessary, and unnecessary optimizations are wrong.
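To show how small the calculating approach can be, here is a sketch in Python (the event names and point values are invented):

    # Point value per activity; changing these re-prices all of history.
    POINTS = {"post_upvoted": 10, "post_downvoted": -2, "answer_accepted": 15}

    def reputation(events):
        """events: an iterable of (user_id, activity) tuples for one user."""
        return sum(POINTS.get(activity, 0) for _, activity in events)

    log = [("u1", "post_upvoted"), ("u1", "answer_accepted"), ("u1", "post_downvoted")]
    print(reputation(log))  # 23

The same event log doubles as the point history, and caching the sum (or recomputing it in a scheduled job) addresses the resource concern.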
It looks like you are trying to build something like Stack Overflow, and Stack Overflow does keep a history of where your points came from. If you use LINQ, the calculation could be done almost purely in SQL without a lot of programming effort (although it would be a bit more advanced than normal LINQ queries).
I'd go for the second option, merely because it's more interesting: you'll learn more about LINQ, caching, and MVC overall.
You can use an ActionFilter class to intercept every action that adds or deletes user points, like an AuditActionFilter class. This can be done just by putting the action filter attribute on top of the corresponding methods. In the audit action filter class, you can easily figure out which method is executing using the filterContext object, and track the progress of points for each user in a flat file or XML, which you can show/parse when the user wants to see their history.
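The same cross-cutting idea, sketched as a plain Python decorator for readers outside ASP.NET (all names are invented):

    import functools
    from datetime import datetime, timezone

    def audit_points(action):
        """Log every point-changing action so a history can be shown later."""
        @functools.wraps(action)
        def wrapper(user_id, delta):
            with open("points_audit.log", "a") as log:
                stamp = datetime.now(timezone.utc).isoformat()
                log.write(f"{stamp} {user_id} {delta:+d} {action.__name__}\n")
            return action(user_id, delta)
        return wrapper

    @audit_points
    def add_points(user_id, delta):
        pass  # update the user's score in the data store here

    add_points("u1", 10)  # appends one audit line to points_audit.log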

Do validators duplicate business logic?

I know it's possible to use validators to check data input in the presentation layer of an application (e.g. regex, required fields etc.), and to show a message and/or required marker icon. Data validation generally belongs in the business layer. How do I avoid having to maintain two sets of validations on data I am collecting?
EDIT: I know that presentation validation is good, that it informs the user, and that it's not infallible. The fact remains, does it not, that I am effectively checking the same thing in two places?
Yes, and no.
It depends on the architecture of your application. We'll assume that you're building an n-tier application, since the vast majority of applications these days tend to follow that model.
Validation in the user interface is designed to provide immediate feedback to the end-user of the system to prevent functionality in the lower tiers from ever executing in the first place with invalid inputs. For example, you wouldn't even want to try to contact the Active Directory server without both a user name and a password to attempt authentication. Validation at this point saves you processing time involved in instantiating an object, setting it up, and making an unnecessary round trip to the server to learn something that you could easily tell through simple data inspection.
Validation in your class libraries is another story. Here, you're validating business rules. While it can be argued that validation in the user interface and validation in the class libraries are the same, I would tend to disagree. Business rule validation tends to be far more complex. Your rules in this case may be more nuanced, and may detect things that cannot be gleaned through the user interface. For example, you may enforce a rule that states that the user may execute a method only after all of a class's properties have been properly initialized, and only if the user is a member of a specific user group. Or, you may specify that an object may be modified only if it has not been modified within the last twenty-four hours. Or, you may simply specify that a string value cannot be null or empty.
In my mind, however, properly designed software uses a common mechanism to enforce DRY (if possible) from both the UI and the class library. In most cases, it is possible. (In many cases, the code is so trivial, it's not worth it.)
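One way that common mechanism can look, as a rough Python sketch (the rules themselves are hypothetical); the same rule table drives both the UI hints and the business-layer check:

    # Single source of truth for the rules.
    RULES = {
        "username": [
            (lambda v: bool(v.strip()), "Username is required."),
            (lambda v: len(v) <= 20, "Username must be 20 characters or fewer."),
        ],
    }

    def validate(field, value):
        """Return the error messages for one field; an empty list means valid."""
        return [msg for check, msg in RULES.get(field, []) if not check(value)]

    # The UI layer uses it for immediate feedback...
    print(validate("username", ""))  # ['Username is required.']
    # ...and the business layer re-runs it before persisting.
    assert validate("username", "chris") == []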
I don't think client-side (presentation layer) validation is actual, useful validation; rather, it simply notifies the user of any errors the server-side (business layer) validation will find. I think of it as a user interface component rather than an actual validation utility, and as such, I don't think having both violates DRY.
EDIT: Yes, you are performing the same action, but for entirely different reasons. If your only goal is strict adherence to DRY, then you do not want to do both. However, by doing both, while you may be performing the same action, the results of that action are used for different purposes (actually validating the information vs. notifying the user of a problem), and therefore performing the same action twice actually produces useful information each time.
I think having good validations at the application layer brings multiple benefits:
1. It facilitates unit testing.
2. You can add multiple clients without worrying about data consistency.
UI validation can be used as a tool to provide quick response times to end users.
Each validation layer serves a different purpose. The user interface validation is there to discard bad input early. The business logic validation performs validation based on business rules.
For UI validation you can use RequiredFieldValidators and the other validators available in the ASP.NET framework. For business validation you can create a validation engine that validates the object. This can be accomplished by using custom attributes.
Here is an article which explains how to create a validation framework using custom attributes:
http://highoncoding.com/Articles/424_Creating_a_Domain_Object_Validation_Framework.aspx
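The linked article is C#-specific; roughly the same attribute-driven idea, as a loose Python sketch (this is not the article's actual API):

    # Declare per-field rules as class metadata, then validate generically.
    class Question:
        rules = {
            "title": lambda v: isinstance(v, str) and 0 < len(v) <= 150,
            "votes": lambda v: isinstance(v, int) and v >= 0,
        }

        def __init__(self, title, votes=0):
            self.title = title
            self.votes = votes

    def broken_fields(obj):
        """Return the names of fields whose rule fails."""
        return [name for name, rule in obj.rules.items()
                if not rule(getattr(obj, name))]

    print(broken_fields(Question("")))       # ['title']
    print(broken_fields(Question("Hello")))  # []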
Following up on a comment from Fredrik Mörk as an answer, because I don't think the other answers are quite right, and it's important for the question.
At least in a context where the presentation validation can be bypassed, the presentation validations and business validations are doing completely different things.
The business validations protect the application. The presentation validations protect the time of the user, and that's all. They're just another tool to assist the user in producing valid inputs, assuming that the user is acting in good faith. Presentation validations should not be used to protect the business validations from having to do extra work because they can't be relied upon, so you're really just wasting effort if you try to do that.
Because of this, your business validations and presentation validations can look extremely different. For business validations, depending on the complexity of your application / scope of what you're validating at any given time, it may well be reasonable to expect them to cover all cases, and guarantee that invalid input is impossible.
But presentation validations are a moving target, because user experience is a moving target. You can almost always improve user experience beyond what you already have, so it's a question of diminishing returns and how much effort you want to invest.
So in answer to your question, if you want good presentation validation, you may well end up duplicating certain aspects of business logic - and you may well end up doing more than that. But you are not doing the same thing twice. You've done two things - protected your application from bad-faith actors, and provided assistance to good-faith actors to use your system more easily. In contexts where the presentation layer cannot be relied upon, there is no way to reduce this down so that you only perform a task like "only a number please" once.
It's a matter of perspective. You can think of this as "I'm checking that the input is a number twice", or you can think "I've guaranteed that I can't receive anything but a number, and I've made sure the user realises as early as possible that they're only supposed to enter a number". That's two things, not one, and I'd strongly recommend that mental approach. It'll help keep the purpose of your two different kinds of validations in mind, which should make your validations better.

Requirements Gathering

How do you go about the requirements gathering phase? Does anyone have a good set of guidelines or tips to follow? What are some good questions to ask the stakeholders?
I am currently working on a new project and there are a lot of unknowns. I am in the process of coming up with a list of questions to ask the stakeholders, but I can't help feeling that I am missing something or forgetting to ask a critical question.
You're almost certainly missing something. A lot of things, probably. Don't worry, it's OK. Even if you remembered everything and covered all the bases, stakeholders aren't going to be able to give you very good, clear requirements without any point of reference. The best way to do this sort of thing is to get what you can from them now, then take that and give them something to react to. It can be a paper prototype, a mockup, version 0.1 of the software, whatever. Then they can start telling you what they really want.
In general, I try and get a feel for the business model my customer/client is trying to emulate with the application they want built. Are we building a glorified forms processor? Are we retrieving data from multiple sources in a single application to save time? Are we performing some kind of integration?
Once the general business model is established, I then move on to the "musts" and "must nots" for the application, to dictate what data I can retrieve, who can perform which functions, etc.
Usually if you can get the customer to explain their model or workflow, you can move from there and find additional key questions.
The one question I always make sure to ask, in some form or another, is "What is the trickiest/most annoying thing you have to do when doing X?" Typically the answer to that reveals the craziest business/data rule you'll have to implement.
Hope this helps!
Steve Yegge makes for a fun read, but there is money to be made in working out what other people's requirements are, so I'd take his article with a pinch of salt.
Requirements gathering is incredibly tough because of the way communication works. It's a four-step process that is lossy at each step:
I have an idea in my head
I transform this into words and pictures
You interpret the pictures and words
You paint an image in your own mind of what my original idea was like
And humans fail miserably at this with worrying frequency through their adorable imperfections.
Agile is right to promote iterative development. Getting early versions out to the client is important in identifying which features are most important (what ships in 0.1 - 0.5 ish), helps to keep you both on the right track in terms of how the application will work, and quickly identifies the hidden features that you would otherwise miss.
The two main problem scenarios are the two ends of the scale:
Not having a freaking clue about what you are doing - get some domain experts
Having too many requirements - the feature pit. Question and cull (prioritise ;) ) features, and use iterative development.
Yegge does well in pointing out that domain experts are essential to produce good requirements because they know the business and have worked in it. They can help identify the core desire of the client and will help explain how their staff will use the system and what is important to the staff.
Alternatives and additions include trying to do the job yourself to get into the mindset or having a client staff member occasionally on-site, although the latter is unlikely to happen.
The feature pit is the other side, mostly full of failed government IT projects. Too much, too soon, not enough thought or application of realism (but what do you expect? They have only about four years to make themselves feel important). The aim here is to work out what the customer really wants.
If you get the core components correct, efficient and bug-free, clients usually remain tolerant of missing features, as long as they eventually arrive in later shipments. This is where iterative development really helps.
Remember to separate the client's ideas of what the program will be like from what they want the program to achieve.
Some clients can create confusion by communicating their requirements in the form of application features, which may be poorly thought out or made redundant by much simpler functionality than they think they require. While I'm not advocating calling the client an idiot or not listening to them, I feel it is always worth asking why they want a particular feature, to get at its underlying purpose.
Remember that in either scenario it is imperative to root out the quickest path to fulfilling the customer's core need, and to put yourself in a scenario where you are both profiting from the relationship.
Wow, where to start?
First, there is a set of knowledge someone should have to do analysis on some projects, but it really depends on what you are building and for whom. In other words, it makes a big difference if you are modifying an enterprise application for a Fortune 100 corporation, building an iPhone app, or adding functionality to a personal webpage.
Second, there are different kinds of requirements.
Objectives: What does the user want to accomplish?
Functional: What does the user need to do in order to reach their objective? (think steps to reach the objective/s)
Non-functional: What are the constraints your program needs to perform within? (think 10 vs 10k simultaneous users, growth, back-up, etc.)
Business rules: What dynamic constraints do you have to meet? (think calculations, definitions, legal concerns, etc.)
Third, the way to gather requirements most effectively, and then get feedback on them (which you will do, right?), is to use models. Use cases and user stories are a model of what the user needs to do. Process models are another version of what needs to happen. System diagrams are just another model of how different parts of the program(s) interact. Good data modeling will define business concepts and show you the inputs, outputs, and changes that happen within your program. Models (and there are more than I listed) are really the key to the concern you list. A few good models will capture the needs, and from models you can determine your requirements.
Fourth, get feedback. I know I mentioned this already, but you will not get everything right the first time, so get responses to what your customer wants.
As much as I appreciate requirements, and the models that drive them, users typically do not understand the ramifications of all their requests. Constant communication, with chances for review and feedback, will give users a better understanding of what you are delivering. Further, they will refine their understanding based on what they see. Unless you're working for the government, iterations and/or prototypes are helpful.
First of all, gather the requirements before you start coding. You can begin the design while you are gathering them, depending on your project life cycle, but you shouldn't ever start coding without them.
Requirements are a set of well-written documents that protect both the client and yourself. Never forget that. If a requirement is not present, then it was not paid for (and thus requires a formal change request); if it is present, then it must be implemented and must work correctly.
Requirements must be testable. If a requirement cannot be tested, then it isn't a requirement.
Requirements must be concrete. That means stating "The system user interface shall be easy to use" is not a correct requirement: it is neither concrete nor testable.
In order to actually "gather" the requirements, you first need to make sure you understand the business model. The client will tell you what they want in their own words; it is your job to understand it and interpret it in the right context.
Hold meetings with the client while you're developing the requirements. Describe the requirements back to the client in your own words, and make sure you and the client share the same understanding of them.
Requirements should be captured as concise, testable statements, but keep track of everything else that comes up in the meetings - diagrams, doubts - and try to maintain a record of every meeting.
If you can use an incremental life cycle, it will give you the ability to improve requirements that were badly gathered at first.
You can never ask too many or "stupid" questions. The more questions you ask, the more answers you receive.
According to Steve Yegge that's the wrong question to ask. If you're gathering requirement it's already too late, your project is doomed.
1. High-level discussions about purpose, scope, limitations of the operating environment, size, etc.
2. Audition a single-paragraph description of the system; hammer it out.
3. Mock up the UI.
4. Formalize known requirements.
Now iterate between 3 and 4 with more and more functional prototypes and more specs with more details. Write tests as you go. Do this until you have functional software and a complete, objective, testable requirements spec.
That's the dream. The reality is usually after a couple iterations everybody goes head-down and codes until there's a month left to test.
Gathering Business Requirements Are Bullshit - Steve Yegge
Read the Agile Manifesto - working software is the only measure of the success of a software project.
Get familiar with agile software practices - study Scrum, lean programming, XP, etc. - this will save you a tremendous amount of time, not only for requirements gathering but for the entire software development lifecycle.
Keep up regular discussions with customers, especially the future users and key users.
Make sure you talk to the people who understand the problem domain, e.g. specialists in the field.
Take brief notes during the talks.
After each conversation, write up an official requirements list and present it for approval. Later on it will be difficult to argue against documentation everyone agreed to.
Make sure your customers know the approximate cost in time and money of implementing "nice to have" requirements.
Label the requirements as "must have", "should have" and "nice to have" from the very beginning, and ensure customers understand the differences between those types too.
Integrate all documents into the latest and final requirements analysis (or the current one for the iteration, or whatever agile cycle you are using).
Remember that requirements change over the software life cycle, so gathering is one thing but managing and implementing is another.
KISS - keep it as simple as possible.
Study the environment where the future system will live - there are more and more technological constraints from legacy and surrounding systems, since companies prefer not to throw away the money they have invested over decades, even if to our modern minds 20-year-old code is garbage...
Like most stages of the software development process, iteration works best.
First find out who your users are -- the XYZ dept,
Then find out where they fit into the organisation -- part of Z division,
Then find out what they do in general terms -- manage cash
Then in specific terms -- collect cash from tills, and check for till fraud.
Then you can start talking to them.
Ask what problem they want you to solve -- you will get an answer like "write a bamboozling system using OCR with shark technologies".
Ignore that answer and ask some more questions to find out what the real problem is -- they can't read the till slips to reconcile the cash.
Agree a real solution with the users -- get a better ink ribbon supplier - or connect the electronic tills to the network and upload the logs to a central server.
Then agree in detail how they will measure the success of the project.
Then and only then propose and agree a detailed set of requirements.
I would suggest you read Roger Pressman's Software Engineering: A Practitioner's Approach.
Before you go talking to the stakeholders/users/anyone, make sure you will be able to put down the gathered information in a useful, lasting way.
Use a sound-recorder if it is OK with the other person and the information is bulky.
If you hear something important and you need some reasonable time to write it down, you have two choices: ask the other person to wait a second, or say goodbye to that precious information. You won't remember it right; ask any neuroscientist.
If you detect that a point needs deeper review, or that you need some document you just heard of, make sure you get a commitment from the other person to send that document, or schedule another meeting with a more specific purpose. Never say "I'll remember to ask for that xls file", because in most cases you won't.
Not too long after the meeting, summarize all your notes, recordings and fresh thoughts. Summarize them right, and create effective reminders for the commitments.
Again, just after the meeting is the perfect time to understand why the gathering you just did was not as good as you thought it was when the meeting ended. That's when you will be able to put down a lot of meaningful questions for another meeting.
I know the question was asked from a pre-meeting perspective, but please be aware that you can work on these matters before the meeting and end up with a much more useful, complete and high-quality gathering.
I've been using mind mapping (like a work breakdown structure) to help gather requirements and define the unknowns (the #1 project killer). Start at a high level and work your way down. You need to work with the sponsors, users and development team to make sure you cover all the angles and don't miss anything. You can't be expected to know the entire scope of what they want without their involvement; you - as a project manager/BA - need to get them involved (the most important part of the job).
There are some great ideas here already. Here are some requirements gathering principles that I always like to keep in mind:
Know the difference between the user and the customer.
The business owners that approve the shiny project are usually the customers. However, a devastating mistake is the tendency to confuse the customer with the user. The customer is usually the person that recognizes the need for your product, but the user is the person that will actually be using the solution (and will most likely complain later about a requirement your product did not meet).
Go to more than one person
Because we're all human, we tend not to remember every excruciating detail. You increase your likelihood of finding missed requirements as you talk to more people and cross-check.
Avoid specials
When a user asks for something very specific, be wary. Always question the biases and see if this will really make your product better.
Prototype
Don't wait till launch to show the user what you have. Do frequent prototypes (you can even call them beta versions) and get constant feedback throughout the development process. You'll probably find more requirements as you do this.
I recently started using the concepts, standards and templates defined by the International Institute of Business Analysis (IIBA).
They have a pretty good BOK (Body of Knowledge) that can be downloaded from their website. They also offer a certification.
Requirements engineering is a bit of an art: there are lots of different ways to go about it, and you really have to tailor it to your project and the stakeholders involved. A good place to start is with Software Requirements by Karl Wiegers:
http://www.amazon.com/Software-Requirements-Second-Pro-Best-Practices/dp/0735618798/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1234910330&sr=8-2
and a requirements engineering process which may consist of a number of steps e.g.:
Elicitation - for the basis for discussion with the business
Analysis and Description - a technical description for the purpose of the developers
Elaboration, Clarification, Verification and Negotiation - further refinement of the requirements
Also, there are a number of ways of documenting the requirements (use cases, prototypes, specifications, modelling languages). Each has its advantages and disadvantages. For example, prototypes are very good for eliciting ideas from the business and for discussing them.
I generally find that writing a set of use cases and including wireframe prototypes works well to identify an initial set of requirements. From that point it's a continual process of working with technical people and business people to further clarify and elaborate on the requirements. Keeping track of what was initially agreed and tracking additional requirements is essential to avoid scope creep. Negotiation between the various parties also plays a big part here, as per the Broken Iron Triangle (http://www.ambysoft.com/essays/brokenTriangle.html).
IMO the most important first step is to set up a dictionary of domain-specific words. When your client says "order", what do they mean? Something they receive from their customers, or something they send to their suppliers? Or maybe both?
Find the keywords in the stakeholders' business, and let them explain those words until you comprehend their meaning in the process. Without that, you will have a hard time trying to understand the requirements.
I wrote a blog article about the approach I use:
http://pm4web.blogspot.com/2008/10/needs-analysis-for-business-websites.html
Basically: questions to ask your client before building their website.
I should add that this questionnaire sheet is only geared towards basic website builds - like a business web presence. It's a totally different story if you are talking about web-based software, although some of it is still relevant (e.g. questions relating to look and feel).
LM
I prefer to keep my requirements gathering process as simple, direct and thorough as possible. You can download a sample document that I use as a template for my projects from this blog posting: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html

Resources