To rewrite or not to rewrite - scale

So I need to maintain this old legacy project. One part of it is amateurishly written on WordPress, with lots of crappy custom plugins and lots of duct-taped scripts that each solve one problem or another; the database is designed very, very poorly too. There is another part written with Zend, which is tightly coupled to the WP part, and there is also this other "masterpiece" project connected to the first project's data. The main table contains around 1.5M records and needs to be normalized as well. Now, this big ball of nails "works", but it also has lots of LOC that are the result of bad foundations, so it is a huge pain to maintain. The way I see it, by not rewriting we are losing in the long term, because we lose flexibility from both a technology and a business perspective; plus it is starting not to scale. But rewriting is a risk, and we would also need to convert the old data to new data structures. The hacker part of me wants to break this, take the risk and do it right, but at the same time I have a feeling that my immaturity is trying to take too big a bite at once. So what do you think?

In situations like yours, 9 out of 10 times people suggest a rewrite and they are wrong.
Unless you have great application-level knowledge of what the system is doing, you will not be able to rewrite it successfully or quickly.
If the system is working today, but is crappy in many ways, and you have management buy-in (they own the software) to "fix stuff", then I suggest that an incremental approach will often be better than a full-on rewrite.
I suspect that the database is giving you the most headaches, so that may be the best place to start. Start by understanding the problem it is currently solving and write that down. If there is no layer between the software and the db (other than JDBC or the like), add a layer. Once there is a layer separating the db from the application, it will be easier to change the db (and the layer) while minimizing the impact on the application.
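To make that concrete, here is a minimal sketch of such a layer in Python (the `customers` table and all column names are invented for illustration, not taken from the poster's schema):

```python
import sqlite3

class CustomerRepository:
    """All SQL for the customers table lives here. If the schema is later
    normalized, only this class changes; callers keep the same interface."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn

    def find_by_email(self, email: str):
        row = self.conn.execute(
            "SELECT id, name, email FROM customers WHERE email = ?",
            (email,),
        ).fetchone()
        return dict(zip(("id", "name", "email"), row)) if row else None

    def rename(self, customer_id: int, new_name: str) -> None:
        with self.conn:  # wraps the write in a transaction
            self.conn.execute(
                "UPDATE customers SET name = ? WHERE id = ?",
                (new_name, customer_id),
            )
```

The application now talks to `CustomerRepository`, never to the tables directly, so the database can be reworked behind it one piece at a time.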
At some point you will be happy with the first thing you changed. At that point, fix some other part. Repeat until the system is "more better".
Concerning risk: Taking risks is not bad, but being careless is terrible. Understand the risks and plan to mitigate them.

In situations like yours, 9 out of 10 times, I suggest a rewrite. Rationally, the situation a) won't get better, and b) will certainly get worse. You should bite the bullet before it's too late.
And by too late I mean something breaks completely, and not only will you have to rewrite the whole thing, but your service will also be offline (ergo you may also be losing users/customers).

A good strategy in situations like this is to "strangle" your application, as described by Martin Fowler:
http://martinfowler.com/bliki/StranglerApplication.html
The strategy is to gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled.
I have already strangled a legacy application using this approach, with great results and practically no downtime.
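In practice the "strangler" often starts as nothing more than a routing rule in front of both systems. A toy sketch (the hostnames and path prefixes are hypothetical):

```python
# Routes already migrated go to the new system; everything else falls
# through to the legacy app. Each release moves one more prefix over.
MIGRATED_PREFIXES = ("/accounts", "/billing")

LEGACY_BACKEND = "http://legacy-app.internal"
NEW_BACKEND = "http://new-app.internal"

def backend_for(path: str) -> str:
    if path.startswith(MIGRATED_PREFIXES):
        return NEW_BACKEND + path
    return LEGACY_BACKEND + path

# Example: billing has been migrated, everything else is still legacy.
assert backend_for("/billing/invoice/7").startswith(NEW_BACKEND)
assert backend_for("/reports/2009").startswith(LEGACY_BACKEND)
```

When MIGRATED_PREFIXES eventually covers everything, the legacy backend can simply be switched off.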

Related

Corp IT Systems direction. Invest in A or B?

This is more of a general question about which direction would be a better investment for the company.
Our company's core business application is written in Visual FoxPro and is about 9+ years old. The database is huge (15+ GB) and the core logic is complex; to make matters worse, the data model is terrible. The two guys who built it and have maintained it all these years are at least in their 50s, so needless to say retirement or possibly death could come within the next decade or so.
This VFP app drives all our core business functions and requires Terminal Services and Citrix to access it from the outside world. Our web apps have to interface with it via ODBC, and we are always having performance issues with it. The servers that run this system are also very old, like Windows 2000 Server, and are falling apart.
Recently we have been having meetings about upgrading the systems that run this core app, as well as other services like email and file storage. The biggest expense, however, is buying new server hardware, OS licensing, Terminal Services licensing, Citrix licensing, etc. to solve some performance and outside-access issues we are currently having, as well as just generally bringing our systems up to date.
The price tag is going to be in the $55K to $65K range. So as a web developer, my point of view is that this is a huge waste of money! My solution would be to invest that money in rewriting the core system to run on the web-based .NET platform. This would eliminate the need for Terminal Server and Citrix licensing, along with the pricey hardware and configuration management needed to run it. I don't see the point in investing this kind of money in an antiquated system that should be on its way out anyway.
I am looking to get some convincing arguments as to why this is a waste of money. Hopefully there is someone here who has faced this type of situation before and can give me some points of view. The hardware upgrade seems to be the easiest road to take, because they will just have a consultant come in and do it all. A software development project would take longer, require more resources, and possibly cost a little more money.
The short-term rewrite vs. re-hardware argument cannot be won. Hardware and licenses are always cheaper than a rewrite. And hardware plus license seems to involve no risk.
You can't win on the ROI argument. Unless the system is trivial and you are a genius, it will always cost $100K or more to rewrite an application that actually does something. Think multiple person-years.
You might win the "technical debt" argument. Change is getting more and more complex, risky and expensive. The longer this code is perpetuated, the more risk and cost accumulates.
The real question is "start to fix now?" or "wait until it breaks and suffer later?" And that has no definite $-valued answer.
You can't compete on money, so you have to compete on risk, features, growth, maintainability, adaptability, standards compliance, security, creating unique value for each customer, etc., etc.
"We are now looking at a larger base of customers and more data". That's an argument you might be able to win.
(I'm over 50, I'm not planning on dying any time soon. That argument doesn't win hearts and minds. Unless they're over 80, you can't really use age except as way to get your argument ignored.)
Focus on the cost (and risk) of making changes.
Prove that you have a web-based solution that makes changes less costly and less risky.
Further, dig into what's there and find parts that can be replaced by a web framework. Code you don't write is cheaper to maintain than code you write.
Every project needs a cost-benefit analysis. If a $60,000 one-time investment will resolve all issues for the next 10 years, then it is (probably) far more economical than hiring a team of developers for even one year to build a newer, better system.
On the other hand, if it's already costing $50,000/year in maintenance and this capital cost is just to keep the system alive, and you'll need to spend another $60k in a few years from now, then it warrants a serious consideration with respect to a re-design.
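A quick back-of-the-envelope comparison makes that trade-off visible. All figures below are illustrative assumptions built from the numbers in this thread, not real quotes:

```python
hardware_refresh = 60_000   # one-time cost to keep the old system alive
old_maintenance = 50_000    # assumed yearly upkeep of the legacy system
rewrite_cost = 300_000      # rough multi-person-year rewrite estimate
new_maintenance = 20_000    # assumed (lower) yearly upkeep after a rewrite

for years in (3, 5, 10):
    keep = hardware_refresh + old_maintenance * years
    rebuild = rewrite_cost + new_maintenance * years
    print(f"{years:>2} yrs: keep ${keep:,} vs rewrite ${rebuild:,}")
```

With these made-up numbers the rewrite only pays off around year eight, which is exactly why the decision hinges on the time horizon you plan over.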
Or you could take the middle road and start wrapping it up in something opaque like a web service, then gradually swap out components for better (more efficient, more maintainable, etc.) internal ones. Lots of companies go this route because it defers the up-front costs of a rewrite; if necessary you can redirect IT resources elsewhere.
S.Lott is right, though - it's likely that you won't be able to compete on cost alone. You have to try to quantify the risks associated with these ancient systems - for example, how much it will cost the company to find and train qualified FoxPro developers if the original programmers decide to quit (or, to use the parlance of so many managers I've met, "run over by a bus")...
Just to add some further perspective to this: Before .NET (and for a few years after) I conducted most of my projects exclusively in Delphi. At the time, it really was a great choice for enterprise development. I was actually the person who didn't want to "upgrade." After a while, however, it became apparent to both myself and my higher-ups that this scared people outside the company.
Investors, auditors, everyone - they didn't like the idea that our core IT asset was done in some "obscure" language. Of course, Delphi wasn't/isn't really that obscure; there's a "delphi" tag here on SO with a count of 3340. But let's use SO as our example - here are the current counts:
c# - 57293
.net - 30577
asp.net - 26600
java - 31023
vb.net - 5996
delphi - 3340
foxpro - 69
vfp - 27
Let those numbers sink in for a while. Delphi, my tool of choice at the time, now has less than 10% of the representation of C#, and this made non-techies nervous. Foxpro/VFP is not even at 1%. I can't even remember how many times I had to answer questions like:
What happens if the lead developer (me) quits or gets run over by a bus?
How difficult/costly will it be to hire programmers in that field?
What if the vendor stops supporting it? (This almost happened)
What if we want to get outside help? Consultants? Security audits?
How easy will it be to get it to work with outside products?
Blah blah blah, worry worry worry, was how I felt at the time, and this was a product that wasn't really that obscure. In your case, we're talking about FoxPro here. FoxPro has gotten to be almost like COBOL; sure, it's still around, there are people out there who know it, but who starts a new project in FoxPro today? It's boring, it's downright ghetto. VB6 is starting to become ghetto, and VB/Access effectively replaced FoxPro so many years ago.
I'm obviously being slightly melodramatic here, but if I were you, this is the angle I would be taking. Forget about the short-term economics, forget about the age, and focus on the obscurity of the product. How many genuine, qualified responses do they think they'll get if they put a want-ad out for a FoxPro developer? What kind of pay would they have to offer for a position like that? What would the turnover be like? This may all seem remote if these two developers have been there for 20-odd years, but when you're running a multimillion-dollar business, you ought to know that it's never a good idea to stake your very survival on one or two employees - not if you can help it.
In general, supplementing a poor system with tons of hardware is a bad plan. I would probably say it's better to rewrite, but it's hard to say without knowing the details.
Bear in mind that a decent rewrite should improve performance, reliability and maintainability, so the potential savings are large and will only increase year on year, even if the initial investment is a little more.
In order to figure out if it is worthwhile, you have to calculate, in addition to the costs of a rewrite:
Documenting everything the system currently does, and reverse-engineering the requirements.
Writing unit and integration tests for everything that currently exists. These probably don't already exist, but should.
Cost of maintaining the new system. The new system isn't going to eliminate maintenance costs, merely reduce them. How much will you save?
Cost of hardware for the new system. The new system is going to have to run on something.
Licensing costs for any software/etc. that are needed for the new system. Is everything going to be open source? Or are you going to need several Visual Studio Test Editions for your developers and testers?
Cost of hiring new personnel to do the development. In addition to the straight salary costs, there are office costs. The total might be $300,000, for say 3 developers, counting salary, office space, equipment, licenses, health care benefits.
Time horizon for the savings. The savings aren't going to occur immediately; they will occur in the future. In the meantime, they still have to pay the licensing for the current system, because something has to do the job until the new system is put in place.
Cash flow issues. Because of the above, in the short term they are going to need more money to fund the development. The actual costs are higher, because they essentially have to get a loan, raise equity, or bear an opportunity cost (they are going to have to forgo some other investment opportunity to pursue the rewrite).
Business risk. There may be a danger that the rewrite might cost more, take longer, or work worse than expected.
Two important numbers:
Number of "FoxPro" jobs listed in San Francisco's craigslist right now: 2.
Number of ".NET" jobs listed in San Francisco's craigslist right now: 252.
A lot of other points that have been mentioned are valid. However, you can spend as much as you want on hardware, but the fact is that if something breaks and you need help, you are going to have a heck of a time finding more people to help.
Sounds like a good time to start talking about a migration¹ to newer, better-supported technologies. (And in 10 years when .NET is old hat, you can do it all over again :)
[1] And evolve the system, don't rewrite it. I would guess your current system grew very organically based on needs at the time. There's no way that you'll be able to completely replace all of that (at least, not without a couple of years and a few million bucks).
As a long-time VFP developer (over 20 years with FoxPro/VFP, and people STILL ask me to write or update their systems in VFP, for a variety of reasons), I can say it is still very powerful. However, bringing my OOP and development experience over to .Net, I do find some things in .Net much easier, especially the strong typing. On the other hand, doing a basic report REQUIRES strong type-casting against all the database tables / structures / objects, and in many cases thus far that has been a PITA.
The price tag for a rewrite is always a significant consideration, but so too is the collapse of ANY system... regardless of whether it's VFP, VB, Access, or something else. I would strongly suggest getting a consulting company in to help with the re-modeling of your system, and maybe to act as a project manager / mentor to your in-house programmers, who may be able to offer their talents even though it may require some training in the new development environment. This way, you get a good base of strong talent in the language, yet keep some costs down by using your own programming staff -- though you may still need to hire supplemental programmers. The learning curve from VFP to .Net is real, and can still be a head-scratcher.
There are a variety of companies out there who were VFP specialists and have subsequently migrated their services to the .Net world; they may be a perfect match for your organization, having the historic knowledge and professional experience of BOTH worlds. I know they can act as mentors for the development of such work, too.
You can only say it is a waste of money after you have analyzed the ROI - it will depend heavily on how much it costs to rewrite the system.
Classic mistake on JOS - "system is a mess, let's rewrite it".
It will be like looking at this old building and seeing a toothpick and wondering why it is there. You figure it isn't needed, and pull it out.
Suddenly the building collapses around your head :)
It might be a better idea to:
Rewrite parts of the system for better maintainability.
Optimize the system for better performance.
Abstract the FoxPro-specific parts, so they can be more easily converted to some other technology later.
This incremental approach would reduce risk, and provide some short-term improvements.
There is no magic bullet here for the company. The only way to be sure is to take the hit on a new server to get the stability and speed benefits that brings to the existing business-critical software. Then, once that is parked for a few years, start re-engineering the thing on a different platform like .NET, if that's what you want to do. Bear in mind that you will have to migrate the VFP data into the new database structure at some point.

How do you deal with design changes?

I just finished working on a project for the last couple of months. It's online and ready to go. The client is now back with what is more or less a complete rewrite of most parts of the application. A new contract has been drafted and payment made for the additional work involved.
I'm wondering what would be the best way to start reworking this whole thing. What are the first few things you would do? How would you rework the design in a way that you stay confident that the stuff you're changing does not break other stuff?
In short, how would you tackle drastic application design changes efficiently (both DB and code)?
Presuming that you have unit tests in place, this is just refactoring.
If you don't have unit tests in place, then
Write unit tests for the parts you're likely to keep.
Write unit tests for the parts you're going to change.
Run the tests. The "keep" should pass. The "change" should fail.
Start refactoring until the tests pass. (A sketch of what these two kinds of tests might look like follows.)
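Here is a minimal sketch in pytest style; `legacy.calculate_total` is a hypothetical stand-in for whatever function you are actually pinning down:

```python
import pytest
from legacy import calculate_total  # hypothetical module under refactoring

def test_keep_existing_behavior():
    # Characterization test: pins down behavior we intend to KEEP.
    # It must pass both before and after the refactoring.
    assert calculate_total([("widget", 2, 9.99)]) == pytest.approx(19.98)

def test_change_to_new_behavior():
    # Encodes the behavior we WANT. It fails against the current code
    # and starts passing once the redesign is in place.
    assert calculate_total([]) == 0
```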
This is not a new thing in software; people have done it before and written a lot about it.
Try reading:
Working Effectively with Legacy Code
Refactoring Databases: Evolutionary Database Design
The techniques explained in these books are invaluable for sustaining any kind of long-running IT project.
Database design is different from application design in this regard.
Very often, client rethinking changes the application completely, but changes little, if anything, in the fundamental underlying data model of the enterprise. The reason for this is that clients tend to think in terms of business processes, but not in terms of fundamental data. Business processing and data processing are tightly coupled. Data storage is less tightly coupled.
In the days of classical database design, designers learned how to exploit this pattern, by dividing their database design into (at least) two layers: logical design and physical design. There are any number of times that a change of business process requires a complete rewrite of the application, and a major rework of the database physical design, but requires few, if any, changes to the logical design.
If your database design didn't separate out the layers like this, it's hard to tell what gets affected and what doesn't. Start with your tables and columns. Ask yourself if any of the changes require removing any column from the table it's in, or require inventing new columns. If the answer is no, you're in luck. Next, look at the constraints placed on the database (things like PRIMARY KEY, FOREIGN KEY, UNIQUE and NOT NULL). These constraints might be tightened or loosened by the client's changes. If not, you're in luck. If you didn't declare any constraints in the database, and chose to do all your integrity protection in application code, you're probably out of luck.
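One cheap way to retrofit that logical/physical separation is a layer of SQL views: the application queries the view (the logical design), and the physical tables behind it can be restructured later without touching application code. A sketch with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Physical design: free to change during the rework.
    CREATE TABLE customer_phys (
        cust_id   INTEGER PRIMARY KEY,
        full_name TEXT NOT NULL,
        email     TEXT UNIQUE
    );
    -- Logical design: the stable names the application codes against.
    CREATE VIEW customer AS
        SELECT cust_id AS id, full_name AS name, email
        FROM customer_phys;
""")
conn.execute("INSERT INTO customer_phys VALUES (1, 'Ada', 'ada@example.com')")
print(conn.execute("SELECT id, name FROM customer").fetchall())
```

If `customer_phys` is later split into two normalized tables, only the view definition changes; every `SELECT ... FROM customer` in the application keeps working.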
You still have a fair amount of work to do in terms of changing the indexes on the tables, and the way the application works with the data. But you've salvaged part of the investment in the old system.
The application itself is much more vulnerable to client changes in process than the database. If your database design was completely driven by your application design, you may be out of luck.
If it's THAT drastic of a change it might be best to just start over. I've worked on a number of projects that have gone through some drastic changes.
Starting over gives you a chance to apply the experience gained on the last project and deliver a more efficient product.
I would recommend against trying to rework the old site into the new site; you'll probably spend more time fiddling around changing things than you would have if you had just rewritten it.
Best of luck to you!
How would you rework the design in a way that you stay confident that the stuff you're changing does not break other stuff? In short, how would you tackle drastic application design changes efficiently (both DB and code)?
Tests, code complexity/coverage metrics, and a continuous integration system. Run them early and often, so you know which parts are the riskiest and where to start writing.
These will become your safety nets when you have to make potentially problematic changes. If something does break, your CI system will tell you, and you won't have spent weeks down some rabbit hole before you realize there's a problem.
Sometimes you do things better the second time around so just try and stay positive. Plus you will have more domain knowledge this time around.

MUD Programming questions

I used to play a MUD based on the Smaug Codebase. It was highly customized, but was the same at the core. I have the source code for this MUD, and am interested in writing my own (Just for a fun project). I've got some questions though, mostly about design aspects. Maybe someone can give me a hand?
What language should I use? Interpreted or compiled? Does it make a difference? SMAUG is written in C. I am comfortable with a lot of languages, and have no problem learning more.
Is there a particular approach I should follow to not hinder performance? Object Oriented, functional, etc?
What medium should I use for storing data? Flat files (This is what SMAUG uses), or something like SQLite. What are the performance pros/cons of both?
Are there any guides that anyone knows of on how to get started on a project like this?
I want it to scale to allow 50 players online at a time with no decrease in performance. If I used Ruby 1.8 (very slow), would it make a difference compared to using Python 3.1 (Faster), or compiled C/C++?
If anyone can lend a hand and give some info or advice, I'd be eternally grateful.
I'll give this a shot:
In 2009, for a 50 player game, it doesn't matter. You may want to pick a language that you're familiar with profiling tools for, if you want to grow it further, but since RAM is so cheap nowadays, the constraints driving the early LPMUD (which I have experience with) and DikuMUD (which your Smaug is derived from) don't apply. (LPMUD could handle ~10-15 players on a machine with 8MB RAM)
The programming style doesn't necessarily lead to performance difficulties. Large sites like Amazon run webservers written in C (the 'obidos' server), but just-as-large sites like the original Yahoo Stores were written in Lisp, StackOverflow is written in ASP.NET, etc. I'd /personally/ use C, but many people would call me a sadist.
Flat files are kind of pointless in this day and age for storing lots of data; there are specific exceptions (large mail servers sometimes use 'maildir', which is structured flat files, for example). At the size of your game you likely won't run into serious slowness from data-retrieval delays, but data integrity in case of a crash is probably the most convincing argument.
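For a game this size, something like SQLite gives you that crash-safety almost for free. A sketch of player persistence (the table layout is invented):

```python
import sqlite3

db = sqlite3.connect("mud.db")
db.execute("""CREATE TABLE IF NOT EXISTS players (
                  name  TEXT PRIMARY KEY,
                  level INTEGER NOT NULL,
                  room  INTEGER NOT NULL)""")

def save_player(name: str, level: int, room: int) -> None:
    # `with db:` wraps the write in a transaction: a crash mid-save rolls
    # back instead of leaving a half-written record, which is exactly the
    # failure mode flat files are prone to.
    with db:
        db.execute("INSERT OR REPLACE INTO players VALUES (?, ?, ?)",
                   (name, level, room))
```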
Don't know of any guide, but here's what I'd do: try to get the game started as a dumb chat server first. Make sure users can log in and do something (take their input and dump it to all other users). Then build that up to allow specific logins, so you start facing the challenges of username/password handling and user option setting / storage / retrieval. Then start adding the gamedriver elements (get tic-tac-toe games working in game), then go a little more complex (get a 5-room setup working with objects you can pick up / drop / bash each other with), then add some non-player characters, and THEN worry about slurping in the Diku-derived Smaug castles / etc. and working with them. :)
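That "dumb chat server" starting point is small enough to sketch whole. This toy version (plain sockets, no game logic, no login yet) accepts connections and echoes each message to every other client:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
server = socket.socket()
server.bind(("", 4000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)
clients: list[socket.socket] = []

while True:
    for key, _events in sel.select():
        sock = key.fileobj
        if sock is server:                      # new connection
            conn, _addr = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
            clients.append(conn)
        else:                                   # data from a client
            data = sock.recv(1024)
            if not data:                        # client disconnected
                sel.unregister(sock)
                clients.remove(sock)
                sock.close()
                continue
            for other in clients:               # broadcast to everyone else
                if other is not sock:
                    other.sendall(data)
```

Logins, commands, and rooms can then be layered onto this loop one feature at a time, as described above.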
This is a bit off the cuff; I'm sure there are dissenting opinions. :) Good luck!
This is a text based game, right? In that case, with current hardware, it seems all you would have to worry about is not accidentally creating an O(n**2) algorithm. Even that probably wouldn't be too bad with 50 users.

Upgrading from Drupal 6 to Drupal 7: best programmer's practices?

Although I have been using Drupal since the D4 series, I only started developing professionally for it with D6, so - although I have done various site upgrades - I have never been faced with the task of porting my own code to a new version.
I know the Drupal community will come up with lots of technical support about changed APIs and architectural changes (see the deadwood module for D5-D6 or even these stubs of D6-D7 how-tos for modules and themes).
However, what I am looking for with my question is more along the lines of strategic thinking; in other words, I am looking for input and advice on how to plan / implement / review the process of porting my own code, in light of what fellow developers have learned from previous experience. Some examples:
Would you advise porting my modules as soon as I have time to do it, maintaining a concurrent D7 branch for some time (so I am "prepared" for D-day), or would you advise waiting until the port is actually imminent, then upgrading the modules to D7 and dropping the D6 version?
Only some of my modules have full test coverage. Would you advise completing test coverage for the D6 version, so all tests are in place to check the D7 port, or would you advise writing the tests at porting time, against the D7 version?
Did you find that being an early adopter gave you an edge in terms of new features and better APIs, or did you rather find it more convenient to delay the conversion so as to leverage the larger pool of readily available contrib modules?
Did you set quality standards / evaluation criteria for yourself, or did you just set the bar at "if it works, I'm happy"? Why? If you set certain standards or goals, what were they / what will they be? How did they help you?
Are there common pitfalls that you experienced in the past and that you think are applicable to the D6-D7 porting process?
Is porting a good moment to do some refactoring, or is that just going to make everything more complex to put back together?
...
These questions are not an exhaustive list, but I hope they give an idea of what kind of information I am looking for. I would rather say: whatever you think is relevant and I did not list above gets a "plus"! :)
If I did not manage to express myself clearly enough, please post a comment with the info you think I should add in the question. Thank you in advance for your time!
PS: Yes I know... D7 is not yet out and it will take months before important contrib modules will be upgraded... but it's never too early to start thinking! :)
Good questions, so let's see:
(when to start porting)
This certainly depends on the complexity of the modules to port. If there are really complex/large ones, it might be useful to start early in order to find tricky spots while not being under pressure. For smaller/standard ones, I'd try to find a bigger time slot later on where I can port many of them in a row in order to get the routine stuff memorized quickly (and benefit from the probably improved documentation).
(test coverage)
Normally I'd say that having a good test coverage before starting refactoring/porting would certainly be advisable. But given that Drupal-7 introduces a major change concerning the testing framework by moving it to core, I'd expect the need to rewrite a significant amount of tests anyway. So if there is no need to maintain the Drupal-6 versions after the migration, I'd save the time/trouble and aim for increased coverage after the porting.
(early adopter vs. wait and see)
Using Drupal since the 4.7 version, we have always waited for at least the first official release of a new major version before even thinking about porting. With Drupal 6, we waited for the views module before porting our first site, and we still have some smaller projects on Drupal-5, as they are working just fine and it would be hard to justify the extra bill for our clients as long as there are still maintenance/security fixes for it. There is just so much time in a day and there is always this backlog of bugs to fix, features to add, etc., so no use playing with unfinished technology while there are more imminent things to do that would immediately benefit our clients. Now this would certainly be different if we'd have to maintain one or more 'official' contributed modules, as offering an early port would be a good thing.
I'm a bit in a bind here - being an early adopter certainly benefits the community, as someone has to find the bugs before they can get fixed, but on the other hand, it makes little business sense to fight hour after hour with bugs others might have found/fixed if you'd just waited a bit longer. As long as I have to do this for a living, I need to watch my resources, trying to strike an acceptable balance between serving the community and benefiting from it :-/
(quality standards)
"If it works, I'm happy" just doesn't cut it, as I don't want to be happy momentarily only, but tomorrow as well. So one of my quality standards is that I need to be (somewhat) certain that I 'grokked' new concepts well enough in order to not just makes things work, but make them work like they should. Now this is hard to define more precisely, as it is obviously impossible to know if one 'got it' before 'getting it', so it boils down to a gut feeling/distinction of 'yeah, it kinda works' vs. 'yup, that looks right', and one has to accept that he will quite regularly be wrong about this.
That said, one particular point I'm looking out for is 'intervene as early as possible'. As a beginner, I often tweaked stuff 'after the fact' during the theming stage, while it would have been much easier to apply the 'fix' earlier in the processing chain by means of one hook or the other. So right now, whenever I'm about to 'adjust' something in the theme layer, I deliberately take a small time out to check if this can not be done more cleanly/compatible within a hook earlier on. As I expect Drupal-7 to add even more hooking options, this is something I will pay extra attention to, as it usually reduces conflicts and sudden 'breaking of stuff' when adding new modules.
(common pitfalls)
Well - mainly porting too early and finding out afterwards/in between that one or more needed modules were not available for the new version at all, or only in a dev/alpha/early-beta state. So I'd make sure to compile a complete list of used/needed modules first, listing their porting state, along with a quick inspection of their issue queues.
Besides that, I have so far always been very pleased with the new versions and their improvements, and I'm looking forward to Drupal-7 again.
(refactoring while porting)
One could say that porting is a rather large refactoring in itself, so there is no need to add to the complexity by restructuring non porting related stuff. On the other hand, if you already have to shred your modules to pieces anyway, why not use the opportunity to make it a major overhaul? I'd try to draw a line based on complexity - for big/complex modules, I'd do the port as straight as possible, and refactor more later on, if need be. For smaller modules, it shouldn't really matter, as the likelihood of introducing subtle bugs should be rather small.
(other stuff)
... need to think about it ...
Ok, other stuff:
Resource needs - given some of the Drupal-7 threads, it looks like they are likely to go up, so this should be evaluated before porting smaller sites that sit on a shared/restricted hosting account.
Latest versions first - This one is rather obvious and always stressed in the upgrade guides, but nevertheless worth mentioning: upgrade core and all modules to their latest current versions before doing a major upgrade, as the upgrade code is highly likely to depend on the latest table/data structures to work correctly. Given Drupal's piecemeal, one-step-at-a-time update strategy, it would be very hard to implement upgrade code that detected different pre-upgrade states and acted accordingly.

How to Convince Programming Team to Let Go of Old Ways?

This is more of a business-oriented programming question that I can't seem to figure out how to resolve. I work with a team of programmers who have been working with BASIC for over 20 years. I was brought in to help write the same software in .NET, only with updates and modern practices. The problem is that I can't seem to get any of the other 3 team members (all BASIC programmers, though one does .NET now as well) to understand how to design a relational database correctly. Here's the thing they won't understand:
We basically have a transaction that keeps track of a customer's tag information, and we need to be able to track current transactions and past transactions. In the old system, a flat-file database was used that had one table containing records with the customer's basic current transaction, and another table that contained all the previous transactions of the customer, along with important money information. To prevent redundancy, they would overwrite the current transaction with the history transactions (the history file was updated first, then the current one). It's totally unnecessary since you only need one transaction table, but neither my supervisor nor my other two co-workers can seem to understand this. How exactly can I convince them to see the light, so that we won't have to do ridiculous amounts of work and end up hitting the database too many times? Thanks for the input!
Firstly, I must admit it's not absolutely clear to me from your description what the data structures and logic flows in the existing system actually are. This implies to me that perhaps you are not making yourself clear to your co-workers either, so one of your priorities must be to be able to explain, either verbally or preferably in writing and diagrams, the current situation and the proposed replacement. Please take this as an observation rather than any criticism of your question.
Secondly, I do find it quite remarkable that programmers with 20 years' experience do not understand relational databases and transactions. Flat-file coding went out of the mainstream a very long time ago - I first handled relational databases in a commercial setting back in 1988, and they were pretty commonplace by the mid-90s. What sector and product type are you working on? It sounds possible to me that you might be dealing with some sort of embedded or otherwise 'unusual' system; in that case you need to make sure that you don't have some sort of communication issue and aren't overlooking a large elephant that hasn't been pointed out to you - you wouldn't be the first 'consultant' brought into a team who has been set up in some manner by not being fed the appropriate information. That said, such archaic shops do still exist - one of my current clients' systems interfaces to a flat-file based system coded in COBOL, and yes, it is hell to manage ;-)
Finally, if you are completely sure of your ground and you are faced with a team who won't take on board your recommendations - and demonstration code is a good idea, if you can spare the time - then you'll probably have to accept the decision gracefully and move on. In this position I would attempt to abstract out the issue - can the database updates be moved into stored procedures, for example, so the code to update both tables lives in the SP and can be modified at a later date to move to your schema without a corresponding application change? Make sure your arguments are well documented and recorded, so you can revisit them later should the opportunity arise.
You will not be the first coder who's had to implement a sub-optimal solution because of office politics - use it as a learning experience for your own personal development in handling such situations, and console yourself with the thought that you'll get paid for the additional work. Often the deciding factor in such arguments is not the logic, but the 'weight of reputation' you yourself bring to the table - it sounds like, having just been brought in, you don't have much of that sort of leverage with your team, so you may have to build a reputation by excelling at implementing what they do agree to do before you carry sufficient weight in subsequent cases - you need to be modded up first!
Sometimes you can't.
If you read some XP books, they often say that one of your biggest hurdles will be convincing your team to abandon what they have always done.
Generally they will recommend letting people who can't adapt go to other projects (Or just letting them go).
Code reviews might help in your case. Mandatory code reviews of every line of code is not unheard of.
Sometimes the best argument is an example. I'd write a prototype (or a replacement, if it's not too much work). With an example to examine, it will be easier to see the pros and cons of a relational database.
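Such a prototype could be as small as this sketch: one transactions table where "current" is simply the latest row per customer (all names here are invented, not the asker's actual schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE transactions (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        tag_info    TEXT,
        amount      REAL,
        created_at  TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE INDEX idx_tx_customer ON transactions (customer_id, created_at);
""")

# The "current" transaction is a query, not a second table to keep in sync:
CURRENT = """SELECT * FROM transactions
             WHERE customer_id = ?
             ORDER BY created_at DESC, id DESC
             LIMIT 1"""
```

One insert per transaction, no double write, no overwrite step; the history is simply everything else in the same table.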
As an aside, flat-file databases have their places since they are so much easier to "administer" than a true relational database. Keep an open mind. ;-)
I think you may have to lead by example - when people see that the "new" way is less work they will adopt it (as long as you don't rub their noses in it).
I would also ask yourself whether the old design is actually causing a problem or whether it is just aesthetically annoying. It's important to pick your battles - if the old design isn't causing a performance problem or making the system hard to maintain you may want to leave the old design alone.
Finally, if you do leave the old design in place, try and abstract the interface between your new code and the old database so if you do persuade your co-workers to improve the design later you can drop the new schema in without having to change anything else.
It is difficult to extract a whole lot except general frustration from the original question.
Yes, there are a lot of techniques and habits long-timers pick up over time that can be useless and even costly in light of technology changes. Some things that made sense when processing power, memory, and even disk was expensive can be foolish attempts at optimization now. It is also very much the case that people accumulate bad habits and bad programming patterns over time.
You have to be careful though.
Sometimes there are good reasons for the things those old timers do. Sadly, they may not even be able to verbalize the "why" - if they even know why anymore.
I see a lot of this sort of frustration when newbies come into an enterprise software development shop. It can be bad even when the environment is all fairly modern technology and tools. If most of your experience is in writing small-community desktop and Web applications, a lot of what you "know" may be wrong.
Often there are requirements for transaction journaling at a level above what your DBMS may provide. Quite often it can be necessary to go beyond DB transaction semantics in order to ensure time-sequence correctness, once-and-only-once updating, resiliency, and non-repudiation.
And this doesn't even begin to address the issues involved in enterprise or inter-enterprise scalability. When you begin to approach half a million complex transactions a day you will find that RDBMS technology fails you. Because relational databases are not designed to handle high transaction volumes you must often break with standard paradigms for normalization and updating. Conventional RDBMS locking techniques can destroy scalability no matter how much hardware you throw at the problem.
It is easy to dismiss all of it as stodginess or general wrong-headedness - even incompetence. But be careful because this isn't always the case.
And by the way: there are other models besides the RDBMS, and the alternative to an RDBMS is not necessarily "flat files" - contrary to the experience of most coders today. There are transactional hierarchical DBMSs that can handle much higher throughput than an RDBMS. IMS is still very much alive in large IBM shops, for example. Other vendors offer similar software for different platforms.
Of course in a 4-man shop maybe none of this applies.
Sign them up for some decent training, and then it's up to you to convince them that with new technologies a lot more is possible (or at least easier!).
But I think the most important thing here is that professional, certified trainers teach them the basics first. They will be more impressed by that instead of just one of their colleagues telling them: "hey, why not use this?"
The following may not apply in your situation, but you make very little mention of technical details, so I thought I'd mention it...
Sometimes, if the access patterns for current data are very different from those for historical data (I'm making this example up, but say current data is accessed thousands of times per second, touches a small subset of columns, and all of it fits in less than 1 GB, whereas historical data runs to thousands of GB, is accessed only hundreds of times per day, and access is to all columns), then what your co-workers are doing would make perfect sense as a performance optimization. By separating out the current data (albeit redundantly), you can optimize the indices and data structures in that table for the higher-frequency access patterns, in a way you could not in the historical table.
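For contrast, here is roughly what that deliberate split looks like as a schema sketch (all names invented): a slim, heavily cached "current" table for the hot path, and a wide history table for the rare full-detail queries.

```python
import sqlite3

SPLIT_SCHEMA = """
-- Hot path: one row per customer, only the columns the high-frequency
-- reads touch, small enough to stay resident in memory.
CREATE TABLE current_tx (
    customer_id INTEGER PRIMARY KEY,
    tag         TEXT,
    amount      REAL
);

-- Cold path: every column, every historical row, accessed rarely.
CREATE TABLE history_tx (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    tag         TEXT,
    amount      REAL,
    fee         REAL,
    notes       TEXT,
    created_at  TEXT
);
CREATE INDEX idx_history_customer ON history_tx (customer_id);
"""

sqlite3.connect(":memory:").executescript(SPLIT_SCHEMA)
```

The cost is the double write the question complains about; whether that cost buys anything depends entirely on whether the access patterns really diverge like this.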
Not everything that is "academically", or "technically" correct from a purely relational perspective makes sense when applied in an actual practical situation.
