There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshalling-unmarshalling of data. I am looking for citable references to these tidbits.
For instance, someone once pointed me to the aterm library and mentioned that its authors had clearly thought about this, and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possible identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshalling itself might as well be too, so that everything can be done in a single pass.
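To make the invariant concrete, here is a minimal sketch in OCaml. The term type, the global table, and the record format are all invented for illustration; a real hash-consing implementation would also store a unique id in each node so that hashing and equality stay shallow.

    (* A hash-consed term type: every node is interned in a global table,
       so structurally equal terms end up physically shared. *)
    type term =
      | Leaf of int
      | Node of term * term

    let table : (term, term) Hashtbl.t = Hashtbl.create 1024

    (* Intern a node whose children are already interned. *)
    let hashcons (t : term) : term =
      match Hashtbl.find_opt table t with
      | Some shared -> shared
      | None -> Hashtbl.add table t t; t

    (* Bottom-up unmarshalling: each record refers to its children by index
       into the nodes already read, so rebuilding and re-sharing happen in
       a single pass over the stream.  Assumes a non-empty stream whose
       root is the last record. *)
    let read_stream (records : [ `Leaf of int | `Node of int * int ] array) : term =
      let nodes = Array.make (Array.length records) (Leaf 0) in
      Array.iteri
        (fun i record ->
           nodes.(i) <- hashcons
               (match record with
                | `Leaf n -> Leaf n
                | `Node (l, r) -> Node (nodes.(l), nodes.(r))))
        records;
      nodes.(Array.length records - 1)

Because a node record can only point backwards in the stream, each child has already been re-shared by the time its parent is hashed, which is exactly the bottom-up property described above.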
I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations.
I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshalling or distribution.
OK, this is not much more use, but Andrew Kennedy wrote a functional pearl called simply Pickling Combinators, which appeared in the Journal of Functional Programming 14(6):727-739, 2004. There is extensive discussion of structure sharing and how it is handled in pickles, but no direct discussion of how this problem might relate to hash-consing in the implementation of the language. But the article does discuss structure sharing in memory as well as in a pickle, so I hope it is better than nothing.
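For a flavor of what such combinators look like, here is a toy sketch in OCaml. The names and the encoding are invented; this is not Kennedy's actual interface, and it omits the sharing support that is the point of his paper.

    (* A pickler is a pair of inverse functions; combinators build picklers
       for structured types out of picklers for their components. *)
    type 'a pu = {
      pickle : Buffer.t -> 'a -> unit;
      unpickle : string -> int -> 'a * int;  (* the value and the next offset *)
    }

    (* Integers as delimited decimal strings. *)
    let int_pu : int pu = {
      pickle = (fun b n -> Buffer.add_string b (string_of_int n); Buffer.add_char b ';');
      unpickle = (fun s i ->
        let j = String.index_from s i ';' in
        (int_of_string (String.sub s i (j - i)), j + 1));
    }

    (* A combinator: a pickler for pairs, from picklers for the components. *)
    let pair_pu (pa : 'a pu) (pb : 'b pu) : ('a * 'b) pu = {
      pickle = (fun b (x, y) -> pa.pickle b x; pb.pickle b y);
      unpickle = (fun s i ->
        let (x, i) = pa.unpickle s i in
        let (y, i) = pb.unpickle s i in
        ((x, y), i));
    }

    (* Round-trip a pair of ints. *)
    let () =
      let b = Buffer.create 16 in
      (pair_pu int_pu int_pu).pickle b (1, 2);
      let ((x, y), _) = (pair_pu int_pu int_pu).unpickle (Buffer.contents b) 0 in
      assert (x = 1 && y = 2)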
Martin Elsman had a follow-on paper in 2005 in Trends in Functional Programming; the title is Type-specialized serialization with sharing. The article deals primarily with hash-consing by the unpickler (deserializer), not with hash-consing in the implementation, but again it may be worth something.
The JFP paper is proprietary, but there appears to be a preprint on Andrew's web page.
Elsman's paper appears to be available through Google Scholar at http://tinyurl.com/yd5tw2b.
(In a previous life, I worked on a project to create ASCII pickles that people could read and edit. I stupidly failed to publish it, but I have retained an interest.)
I found one reference on marshalling in functional languages; not sure if it will be useful, but the authors are smart: http://tinyurl.com/yc3hob9
I believe that Matthias Blume and/or Andrew Appel did something on this, but I can't find the paper. I also believe I reviewed something once for the Journal of Functional Programming, but I can't remember if the paper was accepted or who wrote it.
I suggest you ask Matthias Blume, Andrew Appel, and Phil Wadler if they can help.
Coq V5.10 had hash-consing and marshaling/unmarshaling. I didn't find anything in published form, but the unmarshaling step is referred to as "reinterning" in the source code. Coq unmarshaled values and then traversed them in order to re-create sharing, the obvious and only solution when all the language provides is an unmarshal function of type in_channel -> 'a.
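That approach looks roughly like this. This is an illustration of the shape of the solution, not Coq's actual code, and it assumes a hash-consed term type and hashcons function like the ones in the sketch in the question above.

    (* Unmarshal first, then "reintern": a bottom-up traversal replaces each
       fresh node with its canonical representative, so the unmarshalled copy
       gets re-shared against the terms already interned in memory. *)
    let rec reintern (t : term) : term =
      match t with
      | Leaf _ -> hashcons t
      | Node (l, r) -> hashcons (Node (reintern l, reintern r))

    let read_term (ic : in_channel) : term =
      reintern (Marshal.from_channel ic)  (* in_channel -> 'a *)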
I haven't taken any math classes above basic college calculus. However, in the course of my programming work, I've picked up a lot of math and comp sci from blogs and reading, and I genuinely believe I have a decent mathematical mind. I enjoy and have success doing Project Euler, for example.
I want to dive in and really start learning some cool math, particularly discrete mathematics, set theory, graph theory, number theory, combinatorics, category theory, lambda calculus, etc.
My impression so far is that I'm well equipped to take these on at a conceptual level, but I'm having a really hard time with the mathematical language and symbols. I just don't "speak the language", and though I'm trying to learn it, the going is extremely slow. It can take me hours to work through even one formula- or terminology-heavy paragraph. And yeah, I can look up terms and definitions, but it's a terribly onerous process that very much obscures the theoretical simplicity of what I'm trying to learn.
I'm really afraid I'm going to have to back up to where I left off, get a mid-level math textbook, and invest some serious time in exercises to train myself in that way of thought. This sounds amazingly boring, though, so I wondered if anyone else has any ideas or experience with this.
If you don't want to attend a class, you still need to get what the class would have given you: time in the material and lots of practice.
So, grab that text book and start doing the practice problems. There really isn't any other way (unless you've figured out how osmosis can actually happen...).
There is no knowledge that can only be gained in a classroom.
Check out the MIT Courseware for Mathematics
Also their YouTube site
Project Euler is also a great way to think about math as it relates to programming
Take a class at your local community college. If you're like me you'd need the structure. There's something to be said for the pressure of being graded. I mean there's so much to learn that going solo is really impractical if you want to have more than just a passing nod-your-head-mm-hmm sort of understanding.
Sounds like you're in the same position I am. What I'm finding out about math education is that most of it is taught incorrectly. Whether a cause or result of this, I also find most math texts are written incorrectly. Exceptions are rare, but notable. For instance, anything written by Donald Knuth is a step in the right direction.
Here are a couple of articles that state the problem quite clearly:
A Gentle Introduction To Learning Calculus
Developing Your Intuition For Math
And here's an article on a simple study technique that aims at retaining knowledge:
Teaching linear algebra
Consider auditing classes in discrete mathematics and proofs at a local university. The discrete math class will teach you some really useful stuff (graph theory, combinatorics, etc.), and the proofs class will teach you more about the mathematical style of thinking and writing.
I'd agree with @John Kugelman, classes are the way to go to get it done properly, but I'd add that if you don't want to take classes, the internet has many resources to help you, including recorded lectures, which I find can be more approachable than books and papers.
I'd recommend checking out MIT Open Courseware. There's a Maths for Computer Science module there, and I'm enjoying working through Gilbert Strang's Linear Algebra course of video lectures.
Youtube and videolectures.com are also good resources for video lectures.
Finally, there's a free Maths for CS book at bookboon.
To this list I would now add The Haskell Road to Logic, Maths and Programming, and Conceptual Mathematics: A First Introduction to Categories.
--- Nov 16 '09 answer for posterity ---
Two books. Diestel's Graph Theory, and Knuth's Concrete Mathematics. Once you get the hang of those try CAGES.
Find a good mentor who is an expert in the field who is willing to spend time with you on a regular basis.
There is a sort of trick to learning dense material, like math and mathematical CS. Learning unfamiliar abstract stuff is hard, and the most effective way to do it is to familiarize yourself with it in stages. First, you need to skim it: don't worry if you don't understand everything in the first pass. Then take a break; after you have rested, go through it again in more depth. Lather, rinse, repeat; meditate, and eventually you may become enlightened.
I'm not sure exactly where I'd start, to become familiar with the language of mathematics; I just ended up reading through lots of papers until I got better at it. You might look for introductory textbooks on formal mathematical logic, since a lot of math (especially in language theory) is based off of that; if you learn to hack the formal stuff a bit, the everyday notation might look a bit easier.
You should probably look through books on topics you're personally interested in; the inherent interest should help get you over the hump. Also, make sure you find texts that are actually introductory; I have become wary of slim, undecorated hardbacks labeled Elementary Foobar Theory, which tend to be elementary only to postdocs with a PhD in Foobar.
A word of warning: do not start out with category theory -- it is the most boring math I have ever encountered! Due to its relevance to language design and type theory, I would like to know more about it, but so far I have not been able to deal...
For a nice, scattershot intro to bits of many kinds of CS-ish math, I recommend Godel, Escher, Bach by Hofstadter (if you haven't read it already, of course). It's not a formal math book, though, so it won't help you with the familiarity problem, but it is quite inspirational.
Mathematical notation is akin to several computer languages:
concise
exacting
based on many idioms
subject to a fair amount of local variation and convention
As with a computer language, you don't need to "wash the whole elephant at once": take it one part at a time.
A tentative plan for you could be
identify areas of mathematics that are interesting or important to you (it seems you already have a bit of a sense for that; CS has helped you develop quite a culture for it).
take (or merely audit) a few formal classes in this area. I agree with several answers in this post that an in-person course at a local college is preferable, but at first, or to be sure to get the most out of a particular class, teaching yourself the area with MIT OCW, similar online resources, and associated books is fine.
if an area of math demands too much fluency with some notation, some underlying concept, or (most often) some mechanical computation and transformation techniques, no problem! Just backtrack a bit, learn those foundations (and just those foundations!), and move forward again.
Find a "guru", someone that has a broad mathematical culture and exposure, not necessarily a mathematician, physics folks are good too, indeed they can often articulate math in a more practical fashion. Use this guru to guide you, as he/she can show you how the big pieces fit together.
Note: There is little gain to be had in learning mathematical notation for its own sake. Rather, it should be learned in context, just as, say, a C# idiom is better memorized when used and associated with a specific task than when learned in vacuo. A related SO posting, however, provides several resources for deciphering and learning mathematical notation.
Project Euler takes problems out of context and drops them in for people to solve. It cannot teach you anything effectively; its popularity does not mean anything. You cannot study mathematics through Project Euler, as it contains only bits and pieces (and some pretty high-level pieces) that you're supposed to know already in order to solve the problems. Learning mathematics means choosing a subject, reading a book about it, and solving exercises or reading solutions; that's how you learn math. If it so happens that through your reading you find something close to some Project Euler problem, you're in luck; otherwise Project Euler is a waste of time. The time is much better invested in choosing a particular branch of mathematics and studying that. Let me explain why: I solved three pretty advanced Project Euler problems, and they all drew on knowledge from number theory, which I happened to have because I had studied some of it. I do not think I learned anything from Project Euler; it just happened that I already knew some number theory and could solve the problems.
For example, if you find out you like number theory, take H. Davenport -> Hardy & Wright -> Kenneth Rosen, and study those.
If you like graph theory, take Reinhard Diestel's book, which is freely available, and study that (or check books.google.com and find whichever book is more appropriate to your taste). But don't spread your attention in 999999 directions just because Project Euler has problems ranging from dynamic programming to advanced geometry or advanced number theory; that is clearly the wrong way to go and will not bring you closer to your goal.
This sounds amazingly boring
Well... mathematics is not boring when you find some problem that you are attached to, which you like and would like to find the solution to, and when you have sufficient time to reflect on it while not behind a computer screen. Mathematics is done mostly with pen and paper (yes, you can use computers, but that's not really the point).
So, if you find a real-world problem, or some programming problem, that would benefit from your knowing some advanced maths, and you know which maths you have to study, it can be motivating to learn that way.
If you feel you are not motivated it is hard to study properly.
There is also the question of what you actually mean when you say "learn". Does the learning process stop after you have solved the problems at the end of a chapter of a book? You decide. You can consider that you have finished learning that subject, or you can consider that you have not and read more about it. There are entire books on just one equation and its variations.
The amount of programming-related math that you can learn without formal training is limited, but it's more than enough. And maybe you can teach it to yourself.
It all boils down to your resources and motivation.
To know mathematics you have to do mathematics, not programming (Project Euler).
For beginning to learn category theory I recommend David Spivak's Category Theory for the Sciences (AKA Category Theory for Scientists), because it's relatively comprehensible: its many examples enable understanding by analogy, and it quickly builds a foundation for understanding more abstract concepts.
It requires the ability to reason logically and an intuitive notion of what is a set. It proceeds from sets and functions through basic category theory to adjoint functors, categories of functors, sheaves, monads and an introduction to operads. Two main threads throughout are modeling databases in terms of categories and describing categories with annotated diagrams called ologs. The bibliography provides references to more advanced and specialized topics including recent papers by Dr. Spivak.
An expected outcome from reading this book is the capability of understanding category theory texts and papers written for mathematicians, such as Mac Lane's Categories for the Working Mathematician.
In PDF format it is available from http://math.mit.edu/~dspivak/teaching/sp13/ (the dynamic version is recommended, since it's the most recent). The open-access HTML version is available from https://mitpress.mit.edu/books/category-theory-sciences (also recommended, since it includes additional content such as answers to some exercises).
In my work experience, most fresh out of school programmers are set right to creating reports for 6-12 months or so. While I see the benefit of doing something non-crucial, it seems to really discourage them.
So my question is: should organizations let newbies work with someone experienced right off the bat, obviously on non-critical phases of a project, to get a real feel for what their career choice has in store, or throw them on reports out of the gate?
Ah, there really is nothing like exploiting interns for remedial jobs...
Seriously though, you get back what you put in. Forcing them to do a thoughtless, thankless job for a long period of time is a quick way to build up a useless team member.
Perhaps they should be looking for a job at different companies? Maybe they shouldn't settle?
I was once a fresh-grad, and I have never been asked to work on a report. I had a programming check-in within the first 5 days of my job.
Maybe I am confused about the question. Are we talking about folks who apply for programming positions and are set to doing "reports"-related jobs?!
I didn't start in "reports". I started on a conversion -- just get stuff to run on the new platform. Relatively safe, minor programming changes.
Then I did some new development for a while.
Then another conversion.
Then -- 2 years into my career -- no longer a complete n00b -- I wound up in "Reports". They wanted something like a dozen dumb-as-dirt accounting reports. Each was a "pull from the general ledger", "do some quick math" and "write a columnar report". [It was 1980, that's how stuff was done.]
I couldn't stand to do copy-and-paste programming. So I wrote a thing that extracted from the ledger into an array of values. It used a flexible notation for doing calculations on values in that array, then it wrote out the results of the calculations.
It could add, subtract, multiply and divide. You could use multiple operations on a series of "cells" to compute wonderfully complex things. To a limit.
I had invented the spreadsheet, built as a COBOL batch program. Seriously. That's what putting someone on reports can lead to. A single program that produced the dozen dumb-as-dirt financial reports. And a large number of additional reports, too.
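Transposed into modern notation, the core idea looked something like this (a hypothetical OCaml sketch of the mechanism, not the original COBOL):

    (* Values are extracted from the ledger into an array of cells, and each
       report line is described by a chain of (operation, cell) steps applied
       to a starting cell. *)
    type op = Add | Sub | Mul | Div

    let compute (cells : float array) (start : int) (steps : (op * int) list) : float =
      List.fold_left
        (fun acc (op, i) ->
           match op with
           | Add -> acc +. cells.(i)
           | Sub -> acc -. cells.(i)
           | Mul -> acc *. cells.(i)
           | Div -> acc /. cells.(i))
        cells.(start) steps

    (* e.g. a line such as: gross profit = cell 0 + cell 1 - cell 2 *)
    let gross_profit cells = compute cells 0 [ (Add, 1); (Sub, 2) ]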
Bonus. It was built in an Agile, incremental fashion. The first version did a half-dozen of the really easy reports. The next one did two or three more.
I don't think "reports" is a bad gig. What's bad is forcing people to copy and paste yet another dumb-as-dirt report program from a cookie-cutter template.
I believe it to be beneficial. It's what happened to me long ago and it provided me an opportunity to learn the database schema, the domain, and how the data is being used.
But, if they were hired as a Software Engineer they shouldn't be a report writer indefinitely. Programmer/Analyst however...
It's beneficial to the company in the short run, because then you can get useful work out of new graduates. It's harmful to everyone in the long run, because creating reports isn't really that hard, so the newbies don't learn much from doing it.
That being said, 6-12 months is a really long time to stick anybody on doing reports (unless they enjoy it, which most people don't). Maybe a shorter time period would be better training for a new employee.
I've worked in shops that threw a lot at the new hires, where the results were mixed, and I've worked at shops where they did pointless monkey-business exercises such as writing reports that nobody would read, attending "process" meetings, and open-ended tasks like "read a book about C++" or "learn something about this technology or that one". Both of these approaches were a waste of effort and time.
At my shop, if you are the new guy you aren't going to get left to your own devices to figure out X or to create busy work for yourself. Typically, we'll run you through our products so you are familiar with them as a user, then we'll talk through whatever task it is we need you to do, do the "I'm right over here, tell me if you need assistance" thing, and then check up on you during the morning "what are you working on?" meeting. The goal at my shop is to get a developer up to speed as quickly as possible without skipping over the important stuff.
I think the key to successfully developing the new employee, particularly one who may be right out of school is to challenge them, provide them with interesting tasking that will make them not dread coming to work. If you get them interested in the work, you get an employee who becomes valuable. There are some tasks that just aren't interesting, and we all do them at my place. For me, I dread getting anywhere near MS Word to write formal documentation, but that comes with the territory sometimes. The 'new guy' needs to realize it won't always be code slinging or new development. Sometimes it is maintenance coding - much of the time it is. Sometimes it is 'turn the crank' type work. Sometimes it is report writing.
A good manager or senior developer will mentor the new hire. If a shop doesn't do that, I'd probably not want to work there myself.
They should be pair programming (or spectator programming) with different people from their department for a few weeks. Then they get to know all the people, the structure, the code and useful tips.
Reports are a wonderful introduction.
They tend to have very specific specifications, unlike many other projects. They're a good "stand alone" task. They also give the developer a good introduction to the domain model, which they must use to actually get the data out for the report.
Finally, they're (typically) reasonably simple with some reporting framework doing most of the heavy lifting for them. So they need to focus on learning the tools of the trade, deployment, and the data model.
They're a nice gradual introduction to the larger domain and application.
I've never been put on a non-important job as a safety function. Even when I didn't know exactly what I was doing I got put on important projects people wanted yesterday, and then paired with someone who had specific development he/she wanted to offload onto the new-hire.
It works pretty well that way.
If you put a college grad on report-writing duty for a long time, he's going to bail on you. Bad management and a waste of money...
I have two contrasting experiences with Crystal Reports in two different companies:
With my first employer (fresh out of university), our Crystal Reports expert was leaving, so I was asked to take over the role. No actual training was provided, so I had to learn everything on the job, with no support from either the vendor or the employer. Although my position was described as IT Developer, I eventually spent 100% of my time working on Crystal Reports. It was an unproductive experience for me, and a waste of manpower and resources.
My current employer asked me to assist another developer in creating and maintaining their Crystal Reports setup. Because they provided adequate training, and I was mentored in the role, I gained knowledge of multiple systems and databases. I even got a little experience administering and maintaining SQL Server. And I also got the chance to interact with many different clients in the company, as many different sections of the company needed these reports.
So my answer to the original question is that it really depends on the organization, rather than the central concept. If your employer is intending to use it as a way of familiarizing new employees with multiple systems, then I think it's a great idea. If it's just a short-term way of foisting a thankless (and rotten) job on a hapless new employee, then I think it's a waste of manpower and resources.
The good thing about reports is that they do not update information, so there's no chance that any data will be lost.
It also depends on what the reporting tools are. When I did reporting, I learned tons about SQL and stored procedures. Of course, that is probably not the norm for reporting.
It depends on the report, and it depends on the job. Many reports are anything but trivial, and excellent SQL skills are needed to create a performant and properly maintainable back-end. If your newbies are good with SQL, let them cut their teeth on the queries. It will be a good way for them to learn the schema of your database.
However, if "putting them on reports" is just a euphamism for them trying in vain for hours without direction or inspiration to format a table in Crystal reports 25 (or whatever the current version is), well, I think you probably already know my answer to that question...
Math skills are becoming more and more essential, and I wonder where is a good place to brush up on some basics before moving on to some more CompSci specific stuff?
A site with lots of videos as well as practice exercises would be a double win, but I can't seem to find one.
It depends on your math level. You should start by revising what you should already know up to that point, and then go further into the mathematics of algorithms, geometry (transforms, etc.), statistics, and more.
There are tons of places on the internet where you can learn:
http://www.math.cornell.edu/Courses/courses.html
http://ocw.mit.edu/OcwWeb/web/courses/courses/index.htm
http://mathworld.wolfram.com/
and the list is open.
I recommend Project Euler if you want practice with number theory and discrete maths. Lots of fun exercises, though you need to know a bit of programming.
Steve Yegge wrote a good blog post, Math for programmers.
Quoting some of it:
"But a few things I've learned recently might surprise you:
Math is a lot easier to pick up after you know how to program. In fact, if you're a halfway decent programmer, you'll find it's almost a snap.
They teach math all wrong in school. Way, WAY wrong. If you teach yourself math the right way, you'll learn faster, remember it longer, and it'll be much more valuable to you as a programmer.
Knowing even a little of the right kinds of math can enable you to write some pretty interesting programs that would otherwise be too hard. In other words, math is something you can pick up a little at a time, whenever you have free time.
Nobody knows all of math, not even the best mathematicians. The field is constantly expanding, as people invent new formalisms to solve their own problems. And with any given math problem, just like in programming, there's more than one way to do it. You can pick the one you like best.
Math is... ummm, please don't tell anyone I said this; I'll never get invited to another party as long as I live. But math, well... I'd better whisper this, so listen up: (it's actually kinda fun.)"
I will be boring and recommend actually taking university courses in math.
Without lectures and lessons with an assistant I know I would never be able to learn as much as I have. I just need some kind of motivation, since higher math is really hard.
That is, if you are looking for quite advanced stuff and actually want to get a deep understanding and don't want to crunch numbers. Crunching numbers is why we have MATLAB ;)
It would be good to know what level of math you have, and what you want to do with it. But I guess calculus, linear algebra and discrete math are the most useful courses to take.
I suggest books with good tutorials throughout if you're unable to partake in a maths course. For computer science-related maths Don Knuth's Concrete Mathematics is meant to be very good.
Obviously nothing can replace a good teacher, but good tutorials can come pretty damn close. You really get to learn the subject in the tutorials I think.
Get some videos from www.aduni.org
Math courses
It's a couple of years since this question has been asked, but there are a number of new sites and resources available now:
Khan Academy was originally intended for schoolkids, but it has since expanded to include material that would not be out of place in first-year university courses. It serves as a great way to review and fix fundamentals. It has videos and practice exercises, and keeps track of your progress.
EdX is an evolution of initiatives like MIT Open Courseware. It's now an alliance of universities like MIT, Berkeley and Stanford that offer free online university level courses, with video instruction and learning materials. My only complaint is that some of their courses have prerequisites (like single-variable calculus) that you need to pick up elsewhere, like Coursera, or the original MIT OpenCourseWare site.
Coursera offers more courses than EdX, and many of them are more basic, covering topics like pre-algebra and pre-calculus. The learning interface is not quite as cool as EdX's (which offers a scrollable captioning interface alongside most of its videos), but the broader range of topics and courses covering fundamentals offers learning you just won't find on EdX.
A lot of the universities will actually publish their lecture materials online. So all you really need to do is find a suitable subject and then read the lecture materials and do the associated work. If you were really sneaky you could probably also go to the tutorials to get help :P
BetterExplained.com has some great math lectures. It's not video lectures, but the author gives easy-to-understand explanations of math concepts.
Don't forget that iTunes now has available a load of maths lectures (and other subjects) from various mainstream universities - and all for free.
Since you want to brush up your math, I would suggest a Google search for "UCCS math online", or follow this link; after registering for free you can browse the archives.
I must say that it's common to find people recommending course X, but rarely will you find people who have completed the course they recommend.
In the case of number theory you should go for the latest course; the previous offering does not have high-quality video. Also, for discrete math there are no lecture notes on that site, so you have to work out a correspondence between the two online courses (MIT's 6.042 has good problem sets and notes) and the above math course for discrete math.
I would discourage YouTube "x-minute" tutorials, because most of them cover math like history.
A good course can be found by Googling "Harvard OLI". It has probability (non-continuous); there are problem sets, but without solutions.
How do you go about the requirements gathering phase? Does anyone have a good set of guidelines or tips to follow? What are some good questions to ask the stakeholders?
I am currently working on a new project and there are a lot of unknowns. I am in the process of coming up with a list of questions to ask the stakeholders. However, I can't help but feel that I am missing something or forgetting to ask a critical question.
You're almost certainly missing something. A lot of things, probably. Don't worry, it's ok. Even if you remembered everything and covered all the bases stakeholders aren't going to be able to give you very good, clear requirements without any point of reference. The best way to do this sort of thing is to get what you can from them now, then take that and give them something to react to. It can be a paper prototype, a mockup, version 0.1 of the software, whatever. Then they can start telling you what they really want.
In general, I try and get a feel for the business model my customer/client is trying to emulate with the application they want built. Are we building a glorified forms processor? Are we retrieving data from multiple sources in a single application to save time? Are we performing some kind of integration?
Once the general business model is established, I then move to the "musts" and "must nots" for the application, to dictate what data I can retrieve, who can perform what functions, etc.
Usually if you can get the customer to explain their model or workflow, you can move from there and find additional key questions.
The one question I always make sure to ask in some form or another is: "What is the trickiest/most annoying thing you have to do when doing X?" Typically the answer to that reveals the craziest business/data rule you'll have to implement.
Hope this helps!
Steve Yegge's take is fun, but there is money to be made in working out what other people's requirements are, so I'd take his article with a pinch of salt.
Requirements gathering is incredibly tough because of the manner in which communication works. It's a four-step process that is lossy at each step:
I have an idea in my head
I transform this into words and pictures
You interpret the pictures and words
You paint an image in your own mind of what my original idea was like
And humans fail miserably at this with worrying frequency through their adorable imperfections.
Agile gets it right in promoting iterative development. Getting early versions out to the client is important in identifying which features are most important (what ships in 0.1 - 0.5 ish), helps keep you both on the right track in terms of how the application will work, and quickly identifies the hidden features that you would otherwise miss.
The two main problem scenarios are the two ends of the scales:
Not having a freaking clue about what you are doing - get some domain experts
Having too many requirements - the feature pit. Question, cull (prioritise ;) ) features, and use iterative development.
Yegge does well in pointing out that domain experts are essential to produce good requirements because they know the business and have worked in it. They can help identify the core desire of the client and will help explain how their staff will use the system and what is important to the staff.
Alternatives and additions include trying to do the job yourself to get into the mindset or having a client staff member occasionally on-site, although the latter is unlikely to happen.
The feature pit is the other side, mostly full of failed government IT projects. Too much, too soon, not enough thought or application of realism (but what do you expect? They only have about four years to make themselves feel important). The aim here is to work out what the customer really wants.
As long as you work on getting the core components correct, efficient, and bug-free, clients usually remain tolerant of missing features that arrive in later shipments, as long as they eventually arrive. This is where iterative development really helps.
Remember to separate the client's ideas of what the program will be like and what they want the program to achieve.
Some clients can create confusion by communicating their requirements in the form of application features, which may be poorly thought out or made redundant by much simpler functionality than they think they require. While I'm not advocating calling the client an idiot or not listening to them, I feel that it is worth forever asking why they want a particular feature, to get to its underlying purpose.
Remember that in either scenario it is imperative to root out the quickest path to fulfilling the customer's core need, and to put yourself in a scenario where you are both profiting from the relationship.
Wow, where to start?
First, there is a set of knowledge someone should have to do analysis on some projects, but it really depends on what you are building for who. In other words, it makes a big difference if you are modifying an enterprise application for a Fortune 100 corporation, building an iPhone app, or adding functionality to a personal webpage.
Second, there are different kinds of requirements.
Objectives: What does the user want to accomplish?
Functional: What does the user need to do in order to reach their objective? (think steps to reach the objective/s)
Non-functional: What are the constraints your program needs to perform within? (think 10 vs 10k simultaneous users, growth, back-up, etc.)
Business rules: What dynamic constraints do you have to meet? (think calculations, definitions, legal concerns, etc.)
Third, the way to gather requirements most effectively, and then get feedback on them (which you will do, right?), is to use models. Use cases and user stories are a model of what the user needs to do. Process models are another version of what needs to happen. System diagrams are just another model of how different parts of the program(s) interact. Good data modeling will define business concepts and show you the inputs, outputs, and changes that happen within your program. Models (and there are more than I listed) are really the key to the concern you list. A few good models will capture the needs, and from models you can determine your requirements.
Fourth, get feedback. I know I mentioned this already, but you will not get everything right the first time, so get responses to what your customer wants.
As much as I appreciate requirements, and the models that drive them, users typically do not understand the ramifications of all their requests. Constant communication, with chances for review and feedback, will give users a better understanding of what you are delivering. Further, they will refine their understanding based on what they see. Unless you're working for the government, iterations and/or prototypes are helpful.
First of all, gather the requirements before you start coding. You can begin the design while you are gathering them, depending on your project life cycle, but you shouldn't ever start coding without them.
Requirements are a set of well-written documents that protect both the client and yourself. Never forget that. If no requirement is present then it was not paid for (and thus requires a formal change request); if it is present then it must be implemented and must work correctly.
Requirements must be testable. If a requirement cannot be tested then it isn't a requirement.
Requirements must be concrete. That means a statement like "the system user interface shall be easy to use" is not a correct requirement.
In order to actually "gather" the requirements you need to first make sure you understand the businness model. The client will tell you what they want with its own words, it is your job to understand it and interpret it in the right context.
Hold meetings with the client while you're developing the requirements. Describe them back to the client in your own words and make sure you and the client have the same understanding of each requirement.
Requirements call for concise, testable statements, but keep track of every other thing that comes up in the meetings (diagrams, doubts), and try to maintain a record of every meeting.
If you can use an incremental life cycle, that will give you the ability to improve badly gathered requirements.
You can never ask too many or "stupid" questions. The more questions you ask, the more answers you receive.
According to Steve Yegge that's the wrong question to ask. If you're gathering requirement it's already too late, your project is doomed.
1. High-level discussions about purpose, scope, limitations of operating environment, size, etc.
2. Audition a single-paragraph description of the system; hammer it out.
3. Mock up the UI.
4. Formalize the known requirements.
Now iterate between 3 and 4 with more and more functional prototypes and more detailed specs. Write tests as you go. Do this until you have functional software and a complete, objective, testable requirements spec.
That's the dream. The reality is usually after a couple iterations everybody goes head-down and codes until there's a month left to test.
Business Requirements Are Bullshit - Steve Yegge
Read the Agile Manifesto - working software is the only measure of the success of a software project.
Get familiar with agile software practices - study Scrum, lean programming, XP, etc. This will save you a tremendous amount of time, not only in requirements gathering but across the entire software development lifecycle.
Keep up regular discussions with customers, and especially the future users and key users.
Make sure you talk to the people who understand the problem domain - e.g. specialists in the field.
Take brief notes during the talks.
After each conversation, write up an official requirements list and present it for approval. Later on it will be difficult to argue against documentation everyone agreed to.
Make sure your customers know approximately what the expense in time and money will be for implementing "nice to have" requirements.
Make sure you label the requirements as "must have", "should have", and "nice to have" from the very beginning, and ensure customers understand the differences between those types too.
Integrate all documents into the latest and final requirements analysis (or the current one for the iteration, or whatever agile programming cycle you are using...).
Remember that requirements change over the software life cycle, so gathering is one thing, but managing and implementing them is another.
KISS - keep it as simple as possible.
Also study the environment where the future system will reside - there are more and more technological constraints from legacy or surrounding systems, since companies prefer not to throw away the money they have invested over decades, even if to our modern minds 20-year-old code is garbage...
Like most stages of the software development process, requirements gathering works best iteratively.
First find out who your users are -- the XYZ dept,
Then find out where they fit into the organisation -- part of Z division,
Then find out what they do in general terms -- manage cash
Then in specific terms -- collect cash from tills, and check for till fraud.
Then you can start talking to them.
Ask what problem they want you to solve - you will get an answer like "write a bamboozling system using OCR with shark technologies".
Ignore that answer and ask some more questions to find out what the real problem is - they can't read the till slips to reconcile the cash.
Agree a real solution with the users - get a better ink-ribbon supplier, or connect the electronic tills to the network and upload the logs to a central server.
Then agree in detail how they will measure the success of the project.
Then and only then propose and agree a detailed set of requirements.
I would suggest you read Roger Pressman's Software Engineering: A Practitioner's Approach.
Before you go talking to the stakeholders/users/anyone, be sure you will be able to put down the gathered information in a useful and lasting way.
Use a sound recorder if it is OK with the other person and the information is bulky.
If you hear something important and need some reasonable time to write it down, you have two choices: ask the other person to wait a second, or say goodbye to that precious information. You won't remember it right; ask any neuroscientist.
If you detect that a point needs deeper review, or that you need some document you just heard of, make sure you make a commitment with the other person to send that document or to schedule another meeting with a more specific purpose. Never say "I'll remember to ask for that xls file", because in most cases you won't.
Not too long after the meeting, summarize all your notes, recordings, and fresh thoughts. Just summarize it right. Create effective reminders for the commitments.
Again, just after the meeting is the perfect time to work out why the gathering you just did was not as complete as you thought it was at the end of the meeting. That's when you will be able to put down a lot of meaningful questions for another meeting.
I know the question was asked from the pre-meeting perspective, but please be aware that you can work on these matters before the meeting and end up with a much more useful, complete, and higher-quality gathering.
I've been using mind mapping (like a work breakdown structure) to help gather requirements and define the unknowns (the #1 project killer). Start at a high level and work your way down. You need to work with the sponsors, users and development team to ensure you get all the angles and don't miss anything. You can't be expected to know the entire scope of what they want without their involvement...you - as a project manager/BA - need to get them involved (most important part of the job).
There are some great ideas here already. Here are some requirements gathering principles that I always like to keep in mind:
Know the difference between the user and the customer.
The business owners who approve the shiny project are usually the customers. However, a devastating mistake is the tendency to confuse them with the users. The customer is usually the person who recognizes the need for your product, but the user is the person who will actually be using the solution (and will most likely complain later about a requirement your product did not meet).
Go to more than one person
Because we’re all human, and we tend to not remember every excruciating detail. You increase your likelihood of finding missed requirements as you talk to more people and cross-check.
Avoid specials
When a user asks for something very specific, be wary. Always question the biases and see if this will really make your product better.
Prototype
Don’t wait till launch to show what you have to the user. Do frequent prototypes (you can even call them beta versions) and get constant feedback throughout the development process. You’ll probably find more requirements as you do this.
I recently started using the concepts, standards, and templates defined by the International Institute of Business Analysis (IIBA).
They have a pretty good BOK (Body of Knowledge) that can be downloaded from their website. They also offer a certification.
Requirements engineering is a bit of an art; there are lots of different ways to go about it, and you really have to tailor it to your project and the stakeholders involved. A good place to start is with Software Requirements by Karl Wiegers:
http://www.amazon.com/Software-Requirements-Second-Pro-Best-Practices/dp/0735618798/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1234910330&sr=8-2
and a requirements engineering process which may consist of a number of steps e.g.:
Elicitation - the basis for discussion with the business
Analysis and Description - a technical description for the purpose of the developers
Elaboration, Clarification, Verification and Negotiation - further refinement of the requirements
Also, there are a number of ways of documenting the requirements (use cases, prototypes, specifications, modelling languages). Each has its advantages and disadvantages. For example, prototypes are very good for eliciting ideas from the business and for discussing them.
I generally find that writing a set of use cases and including wireframe prototypes works well to identify an initial set of requirements. From that point it's a continual process of working with technical people and business people to further clarify and elaborate on the requirements. Keeping track of what was initially agreed and tracking additional requirements is essential to avoid scope creep. Negotiation also plays a big part here between the various parties, as per the Broken Iron Triangle (http://www.ambysoft.com/essays/brokenTriangle.html).
IMO the most important first step is to set up a dictionary of domain-specific words. When your client says "order", what do they mean? Something they receive from their customers, or something they send to their suppliers? Or maybe both?
Find the keywords in the stakeholders' business, and let them explain those words until you comprehend their meaning in the process. Without that, you will have a hard time trying to understand the requirements.
I wrote a blog article about the approach I use:
http://pm4web.blogspot.com/2008/10/needs-analysis-for-business-websites.html
Basically: questions to ask your client before building their website.
I should add that this questionnaire sheet is only geared towards basic website builds, like a business web presence. It is a totally different story if you are talking about web-based software, although some of it is still relevant (e.g. questions relating to look and feel).
LM
I prefer to keep my requirements gathering process as simple, direct and thorough as possible. You can download a sample document that I use as a template for my projects at this blog posting: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html