There are (at least) two ways that technical debts make their way into projects. The first is by conscious decision. Some problems just are not worth tackling up front, so they are consciously allowed to accumulate as technical debt. The second is by ignorance. The people working on the project don't know or don't realize that they are incurring a technical debt. This question deals with the second. Are there technical debts that you let into your project that would have been trivial to keep out ("If I had only known...") but once they were embedded in the project, they became dramatically more costly?
Ignoring security problems entirely.
Cross-site scripting is one such example. It's considered harmless until you get alert('hello there!') popping up in the admin interface (if you're lucky - the script might just as easily silently copy all the data admins have access to, or serve malware to your customers).
And then you need 500 templates fixed yesterday. Hasty fixing will cause data to be double-escaped, and won't plug all vulnerabilities.
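For what it's worth, a tiny sketch of the escape-once-at-output idea, using Python's html.escape purely as an illustration (the values are made up):

```python
import html

raw = '<script>alert("hello there!")</script>'  # untrusted, attacker-supplied value

# Escape untrusted data exactly once, at the point it is written into HTML.
safe = html.escape(raw)
print(safe)
# &lt;script&gt;alert(&quot;hello there!&quot;)&lt;/script&gt;

# The classic hasty fix: escaping again in the template layer double-escapes
# the data, so users see literal &amp;lt; entities instead of text.
print(html.escape(safe))
# &amp;lt;script&amp;gt;alert(&amp;quot;hello there!&amp;quot;)&amp;lt;/script&amp;gt;
```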
Storing dates in a database in the local timezone. At some point, your application will be migrated to another timezone and you'll be in trouble. If you ever end up with mixed dates, you'll never be able to untangle them. Just store them in UTC.
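A minimal sketch of the store-UTC, convert-on-display habit (Python used only as an illustration; the timezone names are arbitrary examples):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# What goes into the database: always UTC.
created_at = datetime.now(timezone.utc)

# What the user sees: convert at the edge, for display only.
local_view = created_at.astimezone(ZoneInfo("Europe/Warsaw"))

# Converting an existing local timestamp to UTC before storing it:
local_ts = datetime(2010, 6, 1, 9, 30, tzinfo=ZoneInfo("America/New_York"))
utc_ts = local_ts.astimezone(timezone.utc)
```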
One example of this is running a database in a mode that does not support Unicode. It works right up until the time that you are forced to support Unicode strings in your database. The migration path is non-trivial, depending on your database.
For example, SQL Server has a fixed maximum row length in bytes, so when you convert your columns to Unicode strings (NCHAR, NVARCHAR, etc.) there may not be enough room in the table to hold the data that you already have. Now, your migration code must make a decision about truncation or you must change your table layout entirely. Either way, it's much more work than just starting with all Unicode strings.
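As a rough illustration of why the conversion bites, here is a hypothetical pre-migration check in Python (the sample values are invented): NCHAR/NVARCHAR data is stored as UTF-16, so every value roughly doubles in byte size, and that widened size is what counts against SQL Server's in-row limit (about 8,060 bytes per row).

```python
# Hypothetical pre-migration check: how much wider does each value become
# once it is stored as UTF-16 (the encoding behind NCHAR/NVARCHAR)?
def widened_bytes(value: str) -> int:
    return len(value.encode("utf-16-le"))

# In reality these would be read from the existing table; made-up samples here.
existing_values = ["plain ascii text", "Müller", "数据迁移"]

for value in existing_values:
    current = len(value.encode("latin-1", errors="replace"))  # rough current size
    widened = widened_bytes(value)
    print(f"{value!r}: {current} -> {widened} bytes")
```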
Unit Testing -- I think that failing to write tests as you go incurs a HUGE debt that is hard to make up. Although I am a fan of TDD, I don't really care if you write your tests before or after you implement the code... just as long as you keep your tests synced with your code.
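To make "keep your tests synced with your code" concrete, a minimal sketch using pytest (module and function names are invented):

```python
# calculator.py - hypothetical module under test
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# test_calculator.py - written alongside the code and run on every change
import pytest
from calculator import apply_discount

def test_apply_discount_basic():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```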
Not starting a web project off using a JavaScript framework and hand-implementing stuff that was already available. Maintaining the hand-written JavaScript became enough of a pain that I ended up ripping it all out and redoing it with the framework.
I really struggle with this one, trying to balance YAGNI versus "I've been burned on this once too often".
My list of things I review on every application:
Localization:
Is Time Zone ever going to be important? If yes, persist date/times in UTC.
Are messages/text going to be localized? If yes, externalize messages.
Platform Independence? Pick an easily ported implementation.
Other areas where technical debt can be incurred include:
Black-Hole Data collection: Everything goes in, nothing ever goes out. (No long-term plan for archiving/deleting old data)
Failure to keep MVC or tiers cleanly separated over the application lifetime - for example, allowing too much logic to creep into the View, making adding an interface for mobile devices or web services much more costly.
I'm sure there will be others...
Scalability - in particular for data-driven business applications. I've seen it more than once where all seems to run fine, but when the UAT environment finally gets stood up with database table sizes that approach production's, things start falling down right and left. It's easy for an online screen or batch program to run well when the database is small enough to basically hold all rows in memory.
At a previous company they forced the use of COM for things that didn't need it.
Another company with a C++ codebase didn't allow STL. (WTF?!)
Another project I was on made use of MFC just for the collections - No UI was involved. That was bad.
The ramifications of those decisions were, of course, not great. In two cases we had dependencies on pitiful MS technologies for no reason, and in the other people were forced to use worse implementations of generics and collections.
I classify these as "debt" since we had to make decisions and trade-offs later on in the projects due to the idiotic decisions up front. Most of the time we had to work around the shortcomings.
While not everyone may agree, I think that the largest contributor to technical debt is starting from the interface of any type of application and working down in the stack. I have come to learn that there is less chance of deviation from project goals by implementing a combination of TDD and DDD, because you can still develop and test core functionality with the interface becoming the icing.
Granted, it isn't a technical debt in itself, but I have found that top-down development is more of an open doorway that invites decisions that are not well thought out - all for the sake of doing something that "looks cool". Also, I understand that not everyone will agree or feel the same way about it, so your mileage may vary on this one. Team dynamics and skills are a part of this equation as well.
The cliche is that premature optimization is the root of all evil, and this certainly is true for micro-optimization. However, completely ignoring performance at a design level in an area where it clearly will matter can be a bad idea.
Not having a cohesive design up front tends to lead to it. You can overcome it to a degree if you take the time to refactor frequently, but most people keep bashing away at an overall design that does not match their changing requirements. This may be a more general answer than what you're looking for, but it does tend to be one of the more popular causes of technical debt.
After investigating Scrum and Kanban a little, I finally read this answer and decided to start using Kanban, picking something from Scrum (note that I'm working mostly by myself, and I have read this question and its answers).
Now, my question is: which tool would be best to get started?
whiteboard and Post-it notes
agilezen.com
JIRA with greenhopper
a spreadsheet (possibly on Google Docs)
brightgreenprojects.com
Agilo
Target Process
something else (please specify)
Notes about each:
I would lean towards the whiteboard, but there are several drawbacks (e.g. it cannot produce automatic charts, time measurements, or metrics), and sometimes I work from home - where I need it most - and it's not convenient to carry :-)
I don't want to remember another username/password (I promised to myself to signup only to OpenID-enabled services)
My employer has JIRA but my group doesn't use it - I might ask for an account (it shouldn't require another password) and maybe later involve the rest of the group. But I don't know if they are using greenhopper and if it's a big deal installing it.
I generally hate spreadsheets
maybe overkill?
I'd be happy to have a localhost instance, but it could be problematic to give access to the whole group (per network/firewalls) - not a deal-breaker but surely a concern
What I'd like to get from this:
being more productive
tracking how much time I spend on any given task, possibly discussing the issue with my supervisor
tracking what "blocks" me most often
immediately see where I am compared to my schedule
manage my long todo list in a better way (e.g. answering the "what should I do next?" question faster)
Do you have any suggestion?
Note on the scrumish tag: read Henrik Kniberg's PDF. He first introduced the definition of scrumish on page 9.
If I may, I think that you are on the wrong path. Anything other than 1 or 4 is overkill and pretty much useless for a non-distributed team. So for a team of one person...
Seriously, if you can avoid using a web-based application, just do it. First, unless you have already mastered Scrum/Kanban, you need to learn the process, not a tool. Don't let a tool dictate the process. Besides, most web-based tools are just too click-intensive, less easy and fast to update, and less transparent/visible than a spreadsheet and a physical board. They are really second-category options.
So, I'd go for a spreadsheet and physical board combo. If you need some charts (I'm still wondering what kind of charts/metrics you want to generate and what value they provide), a spreadsheet is the ideal tool (but honestly, you don't need any tool to draw a burndown). If you need to work from home, take the spreadsheet (or use Google Docs) and some Post-its with you. Let's be objective: the impediments you mentioned are actually not real.
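To back up the "you don't need any tool to draw a burndown" point, a minimal sketch assuming you simply record remaining story points once a day (matplotlib is used here only because it's handy; a spreadsheet chart does the same job):

```python
import matplotlib.pyplot as plt

# One number per day, recorded by hand: story points still remaining.
remaining = [40, 37, 35, 35, 30, 26, 22, 21, 15, 9, 4, 0]
days = list(range(1, len(remaining) + 1))

# Straight "ideal" line from the starting total down to zero.
ideal = [remaining[0] * (len(days) - d) / (len(days) - 1) for d in days]

plt.plot(days, remaining, marker="o", label="actual")
plt.plot(days, ideal, linestyle="--", label="ideal")
plt.xlabel("day")
plt.ylabel("story points remaining")
plt.legend()
plt.savefig("burndown.png")
```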
Last thing, if you had chosen the simplest thing that can possibly work, you would already be doing Scrum, Scrumban or whatever. So, instead of looking for a tool, my advice would be to just start doing it.
agilezen.com seems like the ideal solution for you. I have used it in the past solo for myself and it is convenient. I would not let a prejudice against non-OpenID sites get in the way of making a good choice.
pick the tool you already have, and start using it; don't let the absence of the "perfect tool" become an excuse not to start
EDIT: pick the simplest thing that can possibly work. In your case that would be a whiteboard and Post-it notes. These have almost no setup overhead and will provide a constant visual reminder of what you're supposed to be doing.
And I suggest that you get used to making decisions on your own, as you're going to have to be your own Scrum Master ;-)
In the interests of diversity ;-) www.kanbantool.com has just launched too. It's open beta and seems at first glance even more "lightweight" than agilezen.
Target Process is good too.
We've been using JIRA with Greenhopper for a few months, in an effort to go agile. I use it both for Scrum development and for my personal kanban. The software is pretty flexible, and I'm going to keep with it. The story/subtask management is really handy, and it's fairly easy to use. One thing I like is that you can add stories/cards quickly, and can customize the data. This allowed me to add definition-of-done fields, work order numbers, etc.
In short, we're happy with it.
Bright Green have just launched a free version of their tool. It looks good... the free version is fully functional too: https://signup.brightgreenprojects.com/plan/Free
I've tried out another kanban product for personal use and am absolutely loving this one. Feels lightweight and simple but actually packs in a fair amount of functionality at the same time.
www.kanbanery.com (free for personal use)
A novel tool not mentioned before is getsmartQ (out of beta since Dec 2010)
Key characteristics: very intuitive, supports LWP, customizable forms, and task discussions
Pros
configurable workflow, mark overdue stories, mail notifications (e.g., for upcoming story deadlines), multiple story owners
Story forms are completely customizable; per-project workflows and story forms; different roles (e.g. people who can only create stories)
very responsive GUI with partial updates
Apparently good support: I've asked a question and got a good answer within a few hours
Cons
no easy way to declare something as blocked or to distinguish type (feature/bug/..)
no API
no subtasks or story dependencies
In comparison to Agilezen it has a more sophisticated notification system, but apart from that still lacks important features (see cons above).
Note, getsmartQ is under active development and hence missing features mentioned above may have been implemented in the meantime.
Does the functional spec help or hinder your expectations? Are programmers who practice the waterfall methodology open to functional specs? I'm a web designer/developer working with a team of 5 programmers and would like to write a functional spec to explain what we need, so that when I begin my work I know we are working towards the same goal.
I won't start any freelance project until I've got a design spec and functional spec written up and signed off. There's too much room for rogue clients to nickel and dime you to death if you don't have it. The functional spec allows you to stay on target/focused and gives you a natural check list to work to.
If there's no functional spec then you get all the "what ifs" starting to creep in, and developers thinking - you know, this would be useful, and it'll only take me an hour. Sure, an hour to code the prototype and get it basically working - plus a day to design all the tests and make sure all test cases are covered, then another couple of days to iron out all the bugs, then time to write the documentation. There's far too much room for what seems like a trivial addition to be inserted when there's no spec. You've no doubt heard of the infamous "scope creep". There's also far too much room for clients to say "that's not what I wanted..." when you deliver, and then try to wriggle out of paying you.
If you've got the design spec and the functional spec written up ahead of the development and both you and the client have signed off that your understanding of not only the basic details but all the nuances of the language used is one and the same - only then can the real work begin.
There are a couple of anecdotes out there; the first is quite true, while the other is a common misconception:
Software development is only 15% about the code, the rest is resource/people management.
It takes 20% of the time to complete the first 80% of the project and the remaining 80% of the time to complete the last 20%.
The misconception is that a working prototype is 80% of the way there - don't be fooled, it is not. So it's easy for a client to say "what's taking so long, I thought you were almost done!" and then quibble that they're paying too much for something that should've been finished months ago. Some of the design methodologies out there really lend themselves well to this popular misconception. The waterfall design methodology is one of them if it's not used correctly.
My view is: make sure your understanding is the same, and both sign off. Set milestones and make the client very aware at the outset that prototypes are a long way from the completion of the project, and set expectations right from the outset as to what those milestones are and when the client can expect to see them delivered.
For project managers of development teams, documentation and expectations are everything. You can't live without them; they're the only form of recourse you have against "That's not what I said" or "That's not what I meant" and, ergo, "I'm not paying you".
I learned from a lot of mistakes made by far more qualified developers than me and saw what it did to them. The two most important documents for your project are the design spec and the functional spec. Don't leave home without them or it can [or most likely will] come back and bite you in the ass.
Addendum re: User Stories:
An additional note about user stories is that they're great for what they are. They should be included in your functional specification but shouldn't be your functional specification.
A user story only describes a single task. It is very lightweight and doesn't contain excessive detail. As the common recommendation goes, it should fit on a 3x5 card... if you as a project manager handed me a 3x5 card and told me to write a piece of software based on what I read, no doubt it would be handed to the user at the end and they'd tell the project manager that's not what they wanted.
You need a far greater level of detail in a functional spec. It shouldn't be limited to basic workflows. A functional spec is a whole bunch of user stories along with notes on interpretation of those user stories, improvements that can be made to them, common tasks that can be combined to improve efficiency. The list goes on.
So user stories are a good beginning, but they're not a replacement for a functional spec, they're bullet points in a functional spec.
I work with mostly the Waterfall model, and solely with functional specs. When working on my own (where I can set my own model and program any way I want) I start by writing up functional specs and then implementing them. It gives me a much better idea of the size and scope of the work, helps me estimate the time involved, and helps ensure that I don't miss anything.
Also, you can pass this document to:
Users so that they can make their requirements clear
Developers to create the functionality
Testers to make sure they are testing the right thing
Architects so that they can analyze the requirements
Using functional requirements over user stories is a matter of preference and the scope of the project. If you have a small user base, then you may be able to get away with user stories (which outline various event sequences the user might do), but for larger projects, you should use functional requirements as they have more detail and lead to fewer misunderstandings. Think of them as a means of communication with all people involved in the project.
Frankly, the Functional Specifications should already be part of your Big-M (Waterfall) methodology. Your functional specification is WHAT you are going to build; not necessarily how you are going to build it (which would be your detailed design/specification and the next step in the waterfall).
If you haven't written one yet, stop what you are doing and write one. You are just wasting time if you do otherwise. You can start here with Joel's article.
It took me more than 10 years to get it beat into my head to write a functional spec before doing any code. Now I will write one for anything taking more than a day to write. The level of detail, and level of assumptions should be as much as needed to clearly define what needs to be done and communicate it to others (or remind yourself), anything beyond that is a waste.
Others prefer User Stories ... which is fine too, as long as you do some kind of planning.
Another way to accomplish this is by using user stories.
I find well-written functional specs very useful. A well organized functional specification can also help organize your tests (many-to-many mapping from individual requirements to test cases).
<p style="tongue: in-cheek">They also prove useful for finger-pointing in larger organizations (The requirements were inaccurate! The implementation didn't follow the requirement! QA didn't properly test this requirement! etc.)</p>
I'll second Codeslave's reference to Painless Functional Specifications. It's a good series of articles on specifications. See also this Stack Overflow post for a discussion on what content to put into functional specs.
I've done a few large projects, including one with some hundreds of person-years of total effort. As a project team gets larger, the number of informal communication channels goes up with a quadratic upper bound (roughly n(n-1)/2 for n people). Without a spec, this informal mechanism is the only way things get communicated. With a spec, the communication channels approach a hub-and-spokes model, making the growth more like a linear function of the project team size.
A spec is really the only scalable way to get the team 'singing off the same hymn sheet'. There should be some review and negotiation about the spec, but ultimately someone has to take ownership of it to avoid the project becoming a free-for-all.
I think they're a lovely idea, and should be tried.
Just a few comments on some of the answers here...
First of all, I do believe that a good spec document is important for any moderately complex requirement (and definitely for highly complex ones). But make sure it is a good spec, i.e. don't just state the obvious (as one poster already mentioned), but also don't leave out those parts that may seem trivial to you (or even the developers), since you might have more knowledge of that part of the system than some of the others involved (e.g. testers or documenters), who will appreciate the otherwise "missing bits".
And if your spec is good, it will get read - in my experience (and I've written and read lots of specs over the years) it's the bad specs that get dumped and the good ones that get followed.
Concerning user stories (sometimes also called use cases): I think these are important to get an idea of the scenario, but they usually can't replace the details (e.g. screen mockups; how, where, and whether a feature is configurable; the data model; migration issues; etc.), so you'll probably need both for more complex requirements.
In my experience, functional specs have a fine line between not saying enough and saying too much.
If they don't say enough, then they leave areas open to misunderstanding.
If they say too much, they don't leave enough "wiggle room" to improve the solution.
And they should always be open to a process of revision.
It depends on the functional specification. I've had functional specifications where the writer knew the system inside and out, and wrote the specification as such, and I've had other writers write it with just what they expected to see as a user.
With the former, it's easier because I know exactly what I need to write and where I need to write it, but it limits how easily I can refactor the existing code, since estimates took into account just this feature, and not refactoring existing code that it touches.
With the latter, I have freedom to refactor (so long as end functionality is the same), but if it's a system I'm unfamiliar with, the time estimate is harder to make.
Overall, I love functional specifications -- I also like to remind people that they're written on paper that bends, and as such, should be flexible.
Whether you call them functional specs, business requirements, or user stories, they are all very beneficial to the development process.
The issue with them comes when they are chiseled in stone and used as a device to pass blame around when the system ultimately doesn't fit with the user's real needs. I prefer to use the functionals or requirements as a starting point for an iterative process, not as the bible for exactly how the system will look when it is complete. Things change, and users typically don't have an understanding of what they want until they have something (a working prototype possibly) in their hands that they can work with instead of conceptualizing on a piece of paper how it will function in the real world. The most successful projects I've implemented were ones where the development team and the users were closely aligned and were able to rapidly turn around changes instead of holding people to what they wanted on a piece of paper six months ago.
Of course this process wouldn't work if you were in a fixed-bid type of situation as one of the earlier answers pointed out.
One interesting substitute for a func spec or user stories that I have seen advocated is to write a user manual for the software first.
Even if this is only notional (i.e. if you do not intend to ship any manual - which you probably shouldn't as nobody will read it), it can be a useful and reasonably lightweight way to reach a common understanding of what the software will do and look like. And it forces you to do the design up front.
I've found that even if you write Functional Specs, a lot of the detail is sometimes lost on the person you are trying to address. Now if a Functional Spec is written along with some kind of UI mockup, that makes a huge difference. UI mockups not only show the user what you are envisioning, they also trigger users into remembering things they wouldn't have thought of in the first place.
I have seen and written many specs, some were very good, most weren't. The main thing that they all had in common is that they were never followed. They all had cobwebs on them by the 3rd day of coding.
Write the spec if you want; the best time to do it is at the end of the project. It will be useless for developers to follow, but it will at least be an accurate representation of what was done (I know - not really a spec). But don't expect the spec to get the code written for you. That is the real job. If you don't know what to write, talk to your stakeholders and users, get some good user stories, and then get on with it.
In my work experience, most fresh out of school programmers are set right to creating reports for 6-12 months or so. While I see the benefit of doing something non-crucial, it seems to really discourage them.
So my question is: should organizations allow newbies to work with someone experienced right off the bat, obviously on non-critical phases of a project, to get a real feel for what their career choice has in store, or throw them on reports out of the gate?
Ah, there really is nothing like exploiting interns for remedial jobs...
Seriously though, you get back what you put in. Forcing them to do a thoughtless, thankless job for a long period of time is a quick way to build up a useless team member.
Perhaps they should be looking for a job at different companies? Maybe they shouldn't settle?
I was once a fresh-grad, and I have never been asked to work on a report. I had a programming check-in within the first 5 days of my job.
Maybe I am confused about the question. We are talking about folks who apply for programming positions and are sent to doing "reports" related job?!
I didn't start in "reports". I started on a conversion -- just get stuff to run on the new platform. Relatively safe, minor programming changes.
Then I did some new development for a while.
Then another conversion.
Then -- 2 years into my career -- no longer a complete n00b -- I wound up in "Reports". They wanted something like a dozen dumb-as-dirt accounting reports. Each was a "pull from the general ledger", "do some quick math" and "write a columnar report". [It was 1980, that's how stuff was done.]
I couldn't stand to do copy-and-paste programming. So I wrote a thing that extracted from the ledger into an array of values. It used a flexible notation for doing calculations on values in that array, then it wrote out the results of the calculations.
It could add, subtract, multiply and divide. You could use multiple operations on a series of "cells" to compute wonderfully complex things. To a limit.
I had invented the spreadsheet, built as a COBOL batch program. Seriously. That's what putting someone on reports can lead to. A single program that produced the dozen dumb-as-dirt financial reports. And a large number of additional reports, too.
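A rough modern sketch of that shape of program, with invented account codes and figures (the original was a COBOL batch job, so this is only meant to show the data-driven idea):

```python
# Hypothetical ledger extract: account code -> balance.
ledger = {"4000": 125000.00, "4100": 8500.00, "5000": 97250.00, "5100": 11400.00}

# Each report is just data: rows of (label, operation, operands), where an
# operand is either a ledger code or the label of an earlier row.
report = [
    ("revenue",  "add", ["4000", "4100"]),
    ("expenses", "add", ["5000", "5100"]),
    ("net",      "sub", ["revenue", "expenses"]),
]

def run_report(rows, ledger):
    cells = dict(ledger)
    for label, op, operands in rows:
        values = [cells[name] for name in operands]
        cells[label] = sum(values) if op == "add" else values[0] - sum(values[1:])
        print(f"{label:>10}: {cells[label]:>12,.2f}")

run_report(report, ledger)
```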
Bonus. It was built in an Agile, incremental fashion. The first version did a half-dozen of the really easy reports. The next one did two or three more.
I don't think "reports" is a bad gig. What's bad is forcing people to copy and paste yet another dumb-as-dirt report program from a cookie-cutter template.
I believe it to be beneficial. It's what happened to me long ago and it provided me an opportunity to learn the database schema, the domain, and how the data is being used.
But, if they were hired as a Software Engineer they shouldn't be a report writer indefinitely. Programmer/Analyst however...
It's beneficial to the company in the short run, because then you can get useful work out of new graduates. It's harmful to everyone in the long run, because creating reports isn't really that hard, so the newbies don't learn much from doing it.
That being said, 6-12 months is a really long time to stick anybody on doing reports (unless they enjoy it, which most people don't). Maybe a shorter time period would be better training for a new employee.
I've worked in shops that threw a lot at the new hires, where the results were mixed, and I've worked at shops where they did pointless monkey-business exercises such as writing reports that nobody would read, attending "process" meetings, and open-ended tasks like "read a book about C++" or "learn something about this technology or that one". Both of these approaches were a waste of effort and time.
At my shop, if you are the new guy you aren't going to get left to your own devices to figure out X or to create busy work for yourself. Typically, we'll run you through our products so you are familiar with them as a user, then we'll talk through whatever task it is we need you to do, do the "I'm right over here, tell me if you need assistance" thing, and then check up on you during the morning "what are you working on?" meeting. The goal at my shop is to get a developer up to speed as quickly as possible without skipping over the important stuff.
I think the key to successfully developing the new employee, particularly one who may be right out of school is to challenge them, provide them with interesting tasking that will make them not dread coming to work. If you get them interested in the work, you get an employee who becomes valuable. There are some tasks that just aren't interesting, and we all do them at my place. For me, I dread getting anywhere near MS Word to write formal documentation, but that comes with the territory sometimes. The 'new guy' needs to realize it won't always be code slinging or new development. Sometimes it is maintenance coding - much of the time it is. Sometimes it is 'turn the crank' type work. Sometimes it is report writing.
A good manager or senior developer will mentor the new hire. If a shop doesn't do that, I'd probably not want to work there myself.
They should be pair programming (or spectator programming) with different people from their department for a few weeks. Then they get to know all the people, the structure, the code and useful tips.
Reports are a wonderful introduction.
They tend to have very specific specifications, unlike many other projects. They're a good "stand alone" task. They also give the developer a good introduction to the domain model, which they must use to actually get the data out for the report.
Finally, they're (typically) reasonably simple with some reporting framework doing most of the heavy lifting for them. So they need to focus on learning the tools of the trade, deployment, and the data model.
They're a nice gradual introduction to the larger domain and application.
I've never been put on a non-important job as a safety function. Even when I didn't know exactly what I was doing I got put on important projects people wanted yesterday, and then paired with someone who had specific development he/she wanted to offload onto the new-hire.
It works pretty well that way.
If you put a college grad on report-writing duty for a long time, he's going to bail on you. Bad management and a waste of money...
I have two contrasting experiences with Crystal Reports in two different companies:
With my first employer (fresh out of University), our Crystal Reports expert was leaving, so I was asked to take over the role. No actual training was provided, so I had to learn everything on-the-job, with no support from either the Vendor or the Employer. Although my position description was as an IT Developer, I eventually spent 100% of my time working on Crystal Reports. It was an unproductive experience for me, and a waste of manpower and resources.
My current employer asked me to assist another Developer in creating and maintaining their Crystal Reports setup. Because they provided adequate training, and I was mentored in the role, I gained knowledge on multiple systems and databases. I even gained a little experience administering and maintaining SQL Server. And I also got the chance to interact with many different clients in the company, as many different sections of the company needed these reports.
So my answer to the original question is that it really depends on the organization, rather than the central concept. If your employer is intending to use it as a way of familiarizing new employees with multiple systems, then I think it's a great idea. If it's just a short-term way of foisting a thankless (and rotten) job on a hapless new employee, then I think it's a waste of manpower and resources.
The good thing about reports is that they don't update information, so there's no chance that any data will be lost.
Depending on what the tools are for reporting too. When I did reporting, I learned tons about SQL, and stored procedures. Of course that is probably not the norm for reporting.
It depends on the report, and it depends on the job. Many reports are anything but trivial, and excellent SQL skills are needed to create a performant and properly maintainable back-end. If your newbies are good with SQL, let them cut their teeth on the queries. It will be a good way for them to learn the schema of your database.
However, if "putting them on reports" is just a euphemism for them trying in vain for hours without direction or inspiration to format a table in Crystal Reports 25 (or whatever the current version is), well, I think you probably already know my answer to that question...
This is more of a business-oriented programming question that I can't seem to figure out how to resolve. I work with a team of programmers who have been working with BASIC for over 20 years. I was brought in to help write the same software in .NET, only with updates and modern practices. The problem is that I can't seem to get any of the other 3 team members(all BASIC programmers, though one does .NET now as well) to understand how to correctly do a relational database. Here's the thing they won't understand:
We basically have a transaction that keeps track of a customer's tag information. We need to be able to track current transactions and past transactions. In the old system, a flat-file database was used that had one table containing records with the customer's basic current transaction, and another table containing all the previous transactions of the customer along with important money information. To prevent redundancy, they would overwrite the current transaction with the history transactions (the history file was updated first, then the current one). It's totally unnecessary since you only need one transaction table, but neither my supervisor nor my other two co-workers can seem to understand this. How exactly can I convince them to see the light so that we won't have to do ridiculous amounts of work and end up hitting the database too many times? Thanks for the input!
Firstly I must admit it's not absolutely clear to me from your description what the data structures and logic flows in the existing structures actually are. This does imply to me that perhaps you are not making yourself clear to your co-workers either, so one of your priorities must be to be able explain, either verbally or preferably in writing and diagrams, the current situation and the proposed replacement. Please take this as an observation rather than any criticism of your question.
Secondly, I do find it quite remarkable that programmers with 20 years' experience do not understand relational databases and transactions. Flat-file coding went out of the mainstream a very long time ago - I first handled relational databases in a commercial setting back in 1988 and they were pretty commonplace by the mid-90s. What sector and product type are you working on? It sounds possible to me that you might be dealing with some sort of embedded or otherwise 'unusual' system, in which case you do need to make sure that you don't have some sort of communication issue and you're overlooking a large elephant that hasn't been pointed out to you - you wouldn't be the first 'consultant' brought into a team who has been set up in some manner by not being fed the appropriate information. That said, such archaic shops do still exist - one of my current clients' systems interfaces to a flat-file based system coded in COBOL, and yes, it is hell to manage ;-)
Finally, if you are completely sure of your ground and you are faced with a team who won't take on board your recommendations - and demonstration code is a good idea if you can spare the time - then you'll probably have to accept the decision gracefully and move on. In this position I would attempt to abstract out the issue - can the database updates be moved into stored procedures, for example, so the code to update both tables is in the SP and can be modified at a later date to move to your schema without a corresponding application change? Make sure your arguments are well documented and recorded so you can revisit them later should the opportunity arise.
You will not be the first coder who's had to implement a sub-optimal solution because of office politics - use it as a learning experience for your own personal development about handling such situations, and console yourself with the thought that you'll get paid for the additional work. Often the deciding factor in such arguments is not the logic, but the 'weight of reputation' you yourself bring to the table - it sounds like, having been brought in, you don't have much of that sort of leverage with your team, so you may have to work on gaining a reputation by excelling at implementing what they do agree to do, before you have sufficient standing in subsequent cases - you need to be modded up first!
Sometimes you can't.
If you read some XP books, they often say that one of your biggest hurdles will be convincing your team to abandon what they have always done.
Generally they will recommend letting people who can't adapt go to other projects (Or just letting them go).
Code reviews might help in your case. Mandatory code reviews of every line of code is not unheard of.
Sometimes the best argument is an example. I'd write a prototype (or a replacement, if it's not too much work). With an example to examine, it will be easier to see the pros and cons of a relational database.
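For instance, a minimal sketch of the single-table idea being argued for (SQLite here, with invented column names): the "current" transaction is simply the latest row per customer, so nothing ever has to be copied between a current file and a history file.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        tag_info    TEXT,
        amount      REAL,
        created_at  TEXT NOT NULL      -- UTC timestamp
    )
""")
conn.executemany(
    "INSERT INTO transactions (customer_id, tag_info, amount, created_at) VALUES (?, ?, ?, ?)",
    [
        (1, "TAG-001", 25.00, "2009-01-10T12:00:00Z"),
        (1, "TAG-001", 27.50, "2009-06-02T09:30:00Z"),
        (2, "TAG-002", 19.99, "2009-03-15T16:45:00Z"),
    ],
)

# Current transaction per customer: just the most recent row.
current = conn.execute("""
    SELECT customer_id, tag_info, amount, MAX(created_at) AS created_at
    FROM transactions
    GROUP BY customer_id
""").fetchall()

# Full history for one customer: every row, no second table and no copying.
history = conn.execute(
    "SELECT * FROM transactions WHERE customer_id = ? ORDER BY created_at", (1,)
).fetchall()
```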
As an aside, flat-file databases have their places since they are so much easier to "administer" than a true relational database. Keep an open mind. ;-)
I think you may have to lead by example - when people see that the "new" way is less work they will adopt it (as long as you don't rub their noses in it).
I would also ask yourself whether the old design is actually causing a problem or whether it is just aesthetically annoying. It's important to pick your battles - if the old design isn't causing a performance problem or making the system hard to maintain you may want to leave the old design alone.
Finally, if you do leave the old design in place, try and abstract the interface between your new code and the old database so if you do persuade your co-workers to improve the design later you can drop the new schema in without having to change anything else.
It is difficult to extract a whole lot except general frustration from the original question.
Yes, there are a lot of techniques and habits long-timers pick up over time that can be useless and even costly in light of technology changes. Some things that made sense when processing power, memory, and even disk was expensive can be foolish attempts at optimization now. It is also very much the case that people accumulate bad habits and bad programming patterns over time.
You have to be careful though.
Sometimes there are good reasons for the things those old timers do. Sadly, they may not even be able to verbalize the "why" - if they even know why anymore.
I see a lot of this sort of frustration when newbies come into an enterprise software development shop. It can be bad even when the environment is all fairly modern technology and tools. If most of your experience is in writing small-community desktop and Web applications a lot of what you "know" may be wrong.
Often there are requirements for transaction journaling at a level above what your DBMS may do. Quite often it can be necessary to go beyond DB transaction semantics in order to ensure time-sequence correctness, once-and-only-once updating, resiliency, and non-repudiation.
And this doesn't even begin to address the issues involved in enterprise or inter-enterprise scalability. When you begin to approach half a million complex transactions a day you will find that RDBMS technology fails you. Because relational databases are not designed to handle high transaction volumes you must often break with standard paradigms for normalization and updating. Conventional RDBMS locking techniques can destroy scalability no matter how much hardware you throw at the problem.
It is easy to dismiss all of it as stodginess or general wrong-headedness - even incompetence. But be careful because this isn't always the case.
And by the way: There are other models besides the RDBMS, and the alternative to an RDBMS is not necessarily "flat files" - contrary to the experience of most coders today. There are transactional hierarchical DBMSs that can handle much higher throughput than an RDBMS. IMS is still very much alive in large IBM shops, for example. Other vendors offer similar software for different platforms.
Of course in a 4-man shop maybe none of this applies.
Sign them up for some decent training, and then it's up to you to convince them that with new technologies a lot more is possible (or at least easier!).
But I think the most important thing here is that professional, certified trainers teach them the basics first. They will be more impressed by that instead of just one of their colleagues telling them: "hey, why not use this?"
Related post here.
The following may not apply in your situation, but you make very little mention of technical details, so I thought I'd mention it...
Sometimes, if the access patterns are very different for current data than for historical data (I'm making this example up, but say that Current data is accessed 1000s of times per second, and accesses a small subset of columns, and all current data fits in less than 1 GB, whereas, say, historical data uses 1000s of GBs, is accessed only 100s of times per day, and access is to all columns),
then what your co-workers are doing would make perfect sense as a performance optimization. By separating the current data (albeit redundantly), you can optimize the indices and data structures in that table for the higher-frequency access patterns, which you could not do in the historical table.
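To make the shape of that concrete, a hypothetical sketch (table and column names invented): a narrow, hot table for current data and a wide table for history, so each can be indexed for its own access pattern.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Narrow "current" table: one small row per account, only the hot-path columns.
conn.execute("""
    CREATE TABLE current_position (
        account_id INTEGER PRIMARY KEY,
        quantity   REAL NOT NULL,
        updated_at TEXT NOT NULL
    )
""")

# Wide "history" table: every column and every change, read far less often.
conn.execute("""
    CREATE TABLE position_history (
        id           INTEGER PRIMARY KEY,
        account_id   INTEGER NOT NULL,
        quantity     REAL NOT NULL,
        price        REAL,
        counterparty TEXT,
        notes        TEXT,
        recorded_at  TEXT NOT NULL
    )
""")
conn.execute(
    "CREATE INDEX idx_history_account ON position_history (account_id, recorded_at)"
)
```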
Not everything that is "academically", or "technically" correct from a purely relational perspective makes sense when applied in an actual practical situation.
We prototype a design, GUI, just to analyze a particular problem, proof of concept, etc. Sometimes we throw away the prototype, and sometimes it ends up in the production code. We use different languages, technologies, strategies, and styles to prototype.
What are the different situations you prototype usually and how do you prototype? Any good resource out there to master the craft?
One hot title is Effective Prototyping for Software Makers. The issue is that there are several schools of thought.
Rapid Prototyping. Use fancy tools; get something done soon.
Evolutionary Prototyping. Evolve from prototype to production.
Some of this is legacy thinking, based on an era where tools were primitive and projects had to be meticulously planned from the beginning. When I started in this industry, the "green-screen" character-mode applications were rocket science and very painful to mock up. Tools and formal techniques were essential to manage the costs and risks.
This thinking is trumped by some more recent thinking.
Powerful tools remove the need for complex prototypes. HTML mockups can be slapped together quickly. Is it still a prototype when you barely have to budget or plan for it?
[You can mock it up in MS-Word and save it as HTML. It's quicker for a Business Analyst to do it than to specify it and have a programmer do it.]
Also, powerful tools can reduce the cost of mistakes. If it only took a week to put something together -- production-ready -- what's the point of a formal prototype effort?
Agile techniques reduce the need to do quite so much detailed up-front planning. When you put something that works in the hands of users quickly, you don't have quite so much need to be sure every nuance is right before you start. It just has to be good enough to consider it progress.
What can happen is the following. [The hidden question is: is this still "prototyping" -- or is this just an Agile approach with powerful tools?]
Using tools like Django, you can put together the essential, core data structure and exercise it almost immediately. Use the default Django admin pages and you should be up and running as soon as you can articulate the data structures and write load utilities.
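A minimal sketch of what that first step can look like (model and field names are invented for illustration):

```python
# models.py - just the essential, core data structures
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=200)
    email = models.EmailField(blank=True)

class Order(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
    placed_at = models.DateTimeField(auto_now_add=True)
    total = models.DecimalField(max_digits=10, decimal_places=2)


# admin.py - the default admin pages give you working screens immediately
from django.contrib import admin
admin.site.register(Customer)
admin.site.register(Order)
```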
Then, add presentation pages wrapped around real, working data. Be sure you've got things right. Since you've only built data model and template-driven HTML pages, your investment is minimal. Explore.
Iterate until people start asking for smarter transactions than those available in the default admin pages. At this point, you're moving away from "discovery" and "elaboration" and into "construction". Did you do any prototyping? I suppose each HTML template you discarded was a kind of prototype. For that matter, so were the ones you kept.
The whole time, you can be working with more-or-less live, production users.
Personally, I believe a true prototype should not be much more than diagrams drawn on paper to demonstrate the flow of whatever it is you are trying to achieve. You can then use these documented flows to run through several scenarios to see if it works with whoever has requested the functionality.
Once the paper prototype has been modified to a point where it works then use it as a basis to start coding properly.
The benefits of this process are that you can't end up actually using the prototype code in production because there isn't any. Also, it is much easier to test it with business experts as there is not any code for them to understand.
Right now I just draw pictures. I would like to do more, but getting something to a point where the users would understand it any better than a picture would cost too much time.
I am interested in seeing some of these responses :)
I should mention that where I work it's just me and one other guy playing all the roles - project manager (collect data, design spec & app), DBA, coder, tool researcher/developer, et al. - that come with the job of making an app for a small company.
For webapps, start with a pure (x)HTML + CSS mock-up, and then use a framework that makes it easy to implement the functionality.
Template-based frameworks are very good for this, but we've had some good experiences with JSF + Facelets + Seam, too.
The main reason for doing prototypes is reducing risk. Thus, we do UI prototypes, which are really not very helpful unless they actually do something that a user can play around with. Just as important, we also do prototypes to either prove that something can work or figure out how something does work.
I start off making a prototype that makes the most interesting part work, then I throw it away and move on to a new, more interesting project...
*kills self*