Is there an MDSD/MDA success story for a real-world application? [closed]

I am currently facing a situation where I, as an advocate of test-driven development, have to compete with an advocate of model-driven software development (MDSD) / model-driven architecture (MDA).
In my opinion, code generation is a valuable tool in my toolbox, and I make heavy use of templates and automation when needed. I also create diagrams in UML when I think this helps to understand the inner workings or to discuss architecture at the whiteboard. However, I strongly doubt that creating software via UML (creating statecharts and sequence diagrams to produce working code, not just skeletons of code) is more efficient for multi-tier applications (database layer, business/domain layer and a GUI, maybe even distributed). It seems to me that with MDSD the CASE tooling suddenly isn't just a tool anymore but becomes the thing to satisfy: as I see it, on the one hand MDSD developers profit from the higher abstraction UML gives them, but at the same time they struggle with modifying the code generator/template/engine to fulfill needs that might be easily implemented (and tested) with another tool out of their toolbox (Visual Studio, Eclipse, ...).
All this makes me wonder whether there has been a success story (success meaning the product was rolled out on time, within budget and with only a few bugs, and parts of the software were reused later on) for a real-world application that fulfills the following criteria and was developed using a strict model-driven approach:
it has nothing to do with the Object Management Group (OMG) or with consultants related to MDSD/MDA/SOA
the application is not related to Business Process Modelling and is not a CASE tool itself
the application is actively used by end users
it has at least three tiers, including a user interface which goes beyond displaying raw table values, and it is not one of the common MDA/MDSD examples ("how to model a coffee machine, traffic light, dishwasher").

A tiny, but nevertheless useful testimonial on the use of MDSD has been posted on the Model Driven Software Network:
http://www.modeldrivensoftware.net/profiles/blogs/viva-mdd-follow-up-building-a?xg_source=activity
It is a relatively small app being developed, but still a good example of MDSD in action.
More success stories are listed at Metacase's site (http://www.metacase.com/cases/index.html). Metacase sells MetaEdit+, which implements DSM (Domain-Specific Modeling). DSM is just a form of MDSD.
I am also developing ABSE (Atom-Based Software Engineering), another form of MDSD, very close to DSM. ABSE is outlined at http://www.abse.info.

I used MDA and code generation on an embedded system project using 4 processors connected via CAN. We had over 20 axes of motion and many, many sensors. The system was highly robust and maintainable as the mechanical components were evaluated and modified.
We worked in the models and generated code so the models were always up-to-date. We did a careful domain analysis to achieve subject matter isolation. The motor control required very high performance and so was not modeled or generated. Our network drivers were also hand-coded, and we wrote interfaces that allowed bridge services to send events to any service anywhere in the system as needed (although this was tightly controlled so as to minimize interprocessor dependencies).
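For illustration only (this is not the original project's code, and C# stands in for whatever language was actually used), a minimal sketch of the kind of event-bridge interface described above; all names are hypothetical:

```csharp
// Hypothetical sketch of an event bridge: services publish events through this
// interface, and a CAN-backed implementation routes them to the node that
// hosts the target service. Names are illustrative only.
public interface IEventBridge
{
    // Deliver an event to the named service, wherever in the system it lives.
    void Send(string targetService, string eventName, byte[] payload);
}

public sealed class CanEventBridge : IEventBridge
{
    public void Send(string targetService, string eventName, byte[] payload)
    {
        // Look up which processor hosts the target service, then put the event
        // on the CAN bus. Keeping this lookup behind one interface is what keeps
        // interprocessor dependencies tightly controlled.
    }
}
```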
Using the method took a bit of discipline, but having working models was great because they could be reviewed by non-software types.
Version control and differencing of the models was a bit of a challenge but we had a small, localized team so we were able to avoid merge issues.
The good people at Pathfinder Solutions (our tool vendor) can help mentor you through the project.

You could also take a look at the slides from previous Code Generation conferences. Several of these talks were about successful case studies, e.g. http://www.codegeneration.net/cg2009/slides.php

I am working on a legacy modernization project that uses an MDA tool named Bluage. It is for a big healthcare organization and it is in production, so I would say it has been successful. MDA is a good fit for legacy modernization because it can generate a KDM model from technologies such as Pacbase which are going out of support.

I worked on an MDSD system that generated admin-style web apps in Google Closure. I believe that your question is compelling. Too much complexity and your MDSD system is too hard to use; too simple and you won't generate apps that are useful in the real world. Where MDSD really shines is in saving developers from typing lots of plumbing-style code, but how can MDSD remain effective over multiple releases? Requirements can go in many directions. That is the real challenge. I recently blogged about my MDSD lessons learned on that project.

Related

Differences in safety-critical SW development [closed]

When developing safety-critical software using some quality standards (like e.g. IEC 61508 or DO 178-C) developers have to care about many things. I know that the verification in each development step is quite time consuming and expensive. Moreover, I know that some reduced programming languages are used.
But I am interested in the concrete differences from a "normal" SW development process. I mean, in the standard V-Model, verification and testing should also be part of each development step. What do I have to consider when finding requirements? What do I have to consider in SW design?
It isn't so much a change in the "V Model" that helps verify critical systems; it's what you do at each step of the way.
For example you may prefer to plan your development using waterfall in order to have verification steps and controlled transition periods. This has the benefit of staying in line with any government regulations that may be in place.
While developing, it is common to use a limited subset of assemblies (APIs) in order to prevent developers from performing dangerous operations. This type of restriction can also ensure that developers use the APIs correctly, such as making object cleanup a requirement.
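To make the "cleanup as a requirement" point concrete, here is a hedged sketch (in C#, with hypothetical names, not tied to any particular standard) of a restricted wrapper API whose only release path is Dispose, so callers are pushed toward deterministic cleanup:

```csharp
using System;

// Hypothetical restricted API: the only way to operate the hardware channel,
// and the only way to release it, is through this wrapper.
public sealed class MotorChannel : IDisposable
{
    private bool _released;

    public void Move(int steps)
    {
        if (_released) throw new ObjectDisposedException(nameof(MotorChannel));
        // ... issue the motion command via the approved low-level API only ...
    }

    // Dispose is the single release path, so reviewers and analyzers can flag
    // callers that forget the 'using' block.
    public void Dispose()
    {
        if (_released) return;
        _released = true;
        // ... de-energise hardware, return the channel to a safe state ...
    }
}

public static class Example
{
    public static void Main()
    {
        // The 'using' statement guarantees cleanup even if Move() throws.
        using (var channel = new MotorChannel())
        {
            channel.Move(10);
        }
    }
}
```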
Once the product has been developed you'll likely have gone through all of the testing phases. It is common in industry to develop test fixtures in order to verify the system and to generate data that proves to the government or customers that your system does what it says.
In general, this topic is very deep. You mentioned standards; one more is the ISO 2008 standard. I think what you should keep in mind is that the process doesn't change much (the life cycle model stays generally the same), but what you do at each step of the model will change depending on the project. You can take classes on Project Management... in fact it is a track and sometimes a full degree program, so there's tons to learn about process and how to manage different projects.
Googling safety-critical projects and project management will likely turn up a trove of knowledge.
Hope that helps shed some light on the subject.
EDIT: Finding requirements, as in a waterfall process, is very time consuming. It will involve understanding the customer's needs and goals, of course. In general you have to spend a lot of time in this area, both for regulatory reasons and for the software architecture. It's not really a different technique... be explicit; understanding the requirements is the most critical part. "The system shall recover from 90 second timeouts within 5 seconds of resetting." <- it's like all other requirements in SW engineering: explicit and testable, objective not subjective. Think grammar-nazi level of consideration.
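As a sketch of how such an explicit requirement can be made directly testable (NUnit-style; SystemUnderTest and its members are hypothetical stand-ins, not from the answer above):

```csharp
using System.Diagnostics;
using NUnit.Framework;

// Placeholder for whatever driver/test double exercises the real system.
public class SystemUnderTest
{
    public void SimulateTimeout(int seconds) { /* drive the timeout condition */ }
    public void Reset() { /* issue the reset */ }
    public void WaitUntilOperational() { /* poll until the system reports ready */ }
}

[TestFixture]
public class RecoveryRequirementTests
{
    [Test]
    public void RecoversFrom90SecondTimeoutWithin5SecondsOfReset()
    {
        var system = new SystemUnderTest();
        system.SimulateTimeout(90);           // force the 90 second timeout condition

        var stopwatch = Stopwatch.StartNew();
        system.Reset();
        system.WaitUntilOperational();        // blocks until the system reports ready
        stopwatch.Stop();

        Assert.LessOrEqual(stopwatch.Elapsed.TotalSeconds, 5.0,
            "The system shall recover within 5 seconds of resetting");
    }
}
```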
One example of a safety-critical system is Lockheed's F-35... the system requirements manuals are huge, and the process to make a change requires meetings and quite a bit of paperwork.

Refactoring an ASP.NET 2.0 app to be more "modern" [closed]

This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP-type system written in .NET 2.0 to manage all of its day-to-day things (let's simplify and say customer accounts and records). The app was written a couple of years ago when .NET 2.0 was just out and uses the following architectural designs:
Webforms
Data layer is a thin wrapper around SqlCommand that calls stored procedures (a rough sketch follows the list)
Rudimentary DTO-style business objects that are populated via the sprocs
A "business logic" layer that acts as a gateway between the webform and the database (i.e. the code-behind calls that layer)
Let's say that as there are more changes and requirements added to the application, you start to feel that the old architecture is showing its age, and changes are increasingly more difficult to make. How would you go about introducing refactoring steps to A) Modernize the app (i.e. proper separation of concerns) and B) Make sure that the app can readily adapt to change in the organization?
IMO the changes would involve:
Introduce an ORM like Linq to Sql and get rid of the sprocs for CRUD
Assuming that you can't just throw out Webforms, introduce the M-V-P pattern to the forms (see the sketch after this list)
Make sure the gateway classes conform to SRP and the other SOLID principles.
Expose logic that is reused in several places as web service methods instead of copying the code around
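A rough sketch of what the M-V-P suggestion could look like for a WebForms page; the interfaces and names are hypothetical, and this is only one way to slice it:

```csharp
// The view exposes only what the presenter needs, so the page stays dumb.
public interface ICustomerView
{
    int CustomerId { get; }
    string CustomerName { set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class Customer
{
    public string Name;
}

// The presenter holds the logic, so it can be unit tested with a fake view
// and a fake repository, independent of the WebForms runtime.
public class CustomerPresenter
{
    private readonly ICustomerView _view;
    private readonly ICustomerRepository _repository;

    public CustomerPresenter(ICustomerView view, ICustomerRepository repository)
    {
        _view = view;
        _repository = repository;
    }

    public void Load()
    {
        var customer = _repository.GetById(_view.CustomerId);
        _view.CustomerName = customer.Name;
    }
}

// The code-behind then implements ICustomerView and delegates to the presenter:
// public partial class CustomerPage : Page, ICustomerView
// {
//     protected void Page_Load(object sender, EventArgs e)
//     {
//         new CustomerPresenter(this, new SqlCustomerRepository()).Load();
//     }
// }
```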
What are your thoughts? Again this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.
You missed the first step that I would go through:
Cost-Benefit Analysis
Refactoring an app because you think it feels old is not a good reason. It's still running (I'm guessing fairly reliably by this point) and your company already has a lot of time and money invested in the code.
You probably also have a team of developers that are familiar with .NET 2.0 and WebForms whereas many may not understand the concepts/code you're trying to introduce.
Before changing anything, figure out how much money is invested, how much money you're going to spend on your changes, and how much money it'll save in the future...
If the numbers don't add up, no business is going to let you proceed.
Make sure you have a complete suite of tests to cover the existing code before you go tearing it up (a characterization-test sketch follows below).
Write new features using your new techniques.
Update old code to the new techniques if it needs modification because of a feature request.
Refactor old code that doesn't need changes only if you are bored and have no new features to write.
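For the first point, a hedged example of a characterization ("pin the current behaviour down") test, reusing the hypothetical CustomerData sketch from the question; the expected values would be captured from the running system, not invented up front:

```csharp
using NUnit.Framework;

// Placeholder for however the test environment supplies configuration.
public static class TestConfig
{
    public static readonly string ConnectionString = "Server=...;Database=...;";
}

[TestFixture]
public class CustomerDataCharacterizationTests
{
    [Test]
    public void GetCustomer_StillReturnsWhatProductionReturnsToday()
    {
        var dto = CustomerData.GetCustomer(TestConfig.ConnectionString, 42);

        // These expected values are recorded from the current code's output;
        // any refactoring must keep producing them.
        Assert.AreEqual(42, dto.Id);
        Assert.AreEqual("Acme Corp", dto.Name);
    }
}
```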
While it can be tempting, I personally would not make these major changes to the architecture of the application unless they are satisfying specific user requirements. Simply implementing these to make the application better in terms of maintainability sounds like a big risk. You may get 60% of the way through and find major challenges with getting one of these changes to integrate with the rest of the legacy application. It sounds like you'd almost be facing a complete rewrite of the application to ensure everything is consistent with the new architecture. You may invest a lot of time into the rewrite, and find that the maintainability has only improved a little such that recapturing the time invested in the rewrite will take a long time.
If it were really poorly written or in a legacy language like VB6, I would personally say it would be a candidate for a major rewrite. However, .NET 2.0 is a very capable framework and is not a crippling limitation for a lot of applications. From your description it sounds like the application is pretty well designed as is. It actually has a data access layer and business objects, and it is layered in some way. This sounds like a pretty good application considering you can identify these attributes. Consider yourself lucky that this isn't some app that is such a mess you can't pick out anything that resembles a particular design pattern.
Maybe there are places where the things are a little nasty, but sometimes it's not pretty where the rubber meets the road and things get wired together.
I agree with the other posters: don't rewrite unless you can demonstrate the cost-saving benefits.
What you could do is write new areas of the system using e.g. Entity Framework, MVC and jQuery, because the difference will be transparent to browser users. In time you can move the legacy code over bit by bit as you make upgrades/enhancements.
It's also always easier to implement new technologies in new system code rather than porting existing code.

Evolutionary vs throwaway prototyping [closed]

Who is winning in the "Low vs High fidelity prototyping" debate?
Should prototype-zero (P0) be the first version of the final product? Or should P0 always be a throwaway? What approach is the industry favoring?
Excellent article from Wikipedia: Software prototyping
A prototype should always be a throwaway - a prototype is used to quickly prove a concept and influence the design of the real product. As such, a lot of things which are important for a real product (a thought-out architecture and design, reliability, security, maintainability, etc.) fall by the wayside. If you do take these things into account when building your prototype, you're not really building a prototype anymore.
My experience with prototypes where the code directly evolved into an actual product shows that the end result suffers because of it - the lack of a real architecture resulted in a lot of cobbled-together code that had to be constantly hacked to add new features. I've even seen a case where the original technology chosen for rapid development of the prototype was not the best choice for the actual product, and a complete re-write was necessary for V2.
I think we, the pedants, have lost this particular battle -- alleged "prototypes" (which by definition should be rewritten from scratch!!!-) are in fact being "evolved" into (often half-baked) "betas", etc.
Even today, I've applauded the smart attempt by a colleague of mine to recapture the concept, even if the term is a lost battle: he's setting up a way for small proof-of-concept projects to be developed (and, if the concept does get proven, transferred to software engineers for real prototyping, then development).
The idea is that, in our department, we have many people who aren't (and aren't in fact supposed to be!-) software developers, but are very smart, computer savvy, and in daily contact with the reality "in the trenches" -- they are the ones who are most likely to smell an opportunity for some potential innovation which could have real impact once implemented as a "production-ready" software project. Salespeople, account managers, business analysts, technology managers -- at our company, they all often fit this description.
But they're NOT going to program in C++, hardly at all in Java, maybe in Python but miles away from "productionized" -- indeed they're far more likely to whip up a smart proof of concept in php, javascript, perl, bash, Excel+VBA, and sundry other "quick and dirty" technologies we don't even want to dream about productionizing and supporting forevermore!-)
So by calling their prototypes "proofs of concept", we hope to encourage them to embody their daring concepts in concrete form (vague natural-language blabberings and much waving of hands being least useful, and alien to the company's culture anyway;-) and yet sharply indicate that such projects, if promoted to exist among the software engineers' goals and priorities, DO have to be programmed from scratch -- the proof-of-concept serves, at best, as a good draft/sketch spec for what the engineers are aiming for, definitely NOT to be incrementally enriched, but redone from the root up!-).
It's early to say how well this idea works -- ask me in three months, when we evaluate the quarter's endeavors (right now, we're just providing a blueprint for them, hot on the heels of evaluating last quarter's department- and company-wise undertakings!-).
Write the prototype, then keep refactoring it until it becomes the product.
The key is to not hesitate to refactor when necessary.
It helps to have few people working on it initially. With too many people working on something, refactoring becomes more difficult.
Response from BUNDALLAH, HAMISI
A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation.
Contrary to what my other colleagues have suggested above, I would NOT advise my boss to opt for the throwaway prototype model. I am with Anita on this. Given the two prototype models and the circumstances provided, I would strongly advise the management (my boss) to opt for the evolutionary prototype model. With the company being large, and given all the other variables such as the complexity of the code and the newness of the programming language to be used, I would not use the throwaway prototype model.

A throwaway prototype becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype is 'thrown away', and the system is formally developed based on the identified requirements (Crinnion, 1991). But in this situation, the users may not know all the requirements at once, due to the complexity of the factors given.

Evolutionary prototyping is the process of developing a computer system by gradual refinement. Each refinement of the system contains a system specification and a software development phase. In contrast to both the traditional waterfall approach and incremental prototyping, which require everyone to get everything right the first time, this approach allows participants to reflect on lessons learned from the previous cycle(s). It is usual to go through three such cycles of gradual refinement, though there is nothing stopping a process of continual evolution, which is often the case in many systems. According to Davis (1992), evolutionary prototyping acknowledges that we do not understand all the requirements (and we have been told above that the system is complex, the company is large, the code will be complex, and the language is fairly new to the programming team).

The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will be built on top of it. This technique allows the development team to add features, or make changes, that couldn't be conceived during the requirements and design phase. For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done"; it is always maturing as the usage environment changes. Developers often try to define a system using their most familiar frame of reference -- where they are currently (or rather, the current system status). They make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered (SPC, 1997).
Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.
In Evolutionary Prototyping, developers can focus themselves to develop parts of the system that they understand instead of working on developing a whole system. To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest. (Bersoff and Davis, 1991).
However, the main problems with evolutionary prototyping are due to poor management: Lack of defined milestones, lack of achievement - always putting off what would be in the present prototype until the next one, lack of proper evaluation, lack of clarity between a prototype and an implemented system, lack of continued commitment from users. This process requires a greater degree of sustained commitment from users for a longer time span than traditionally required. Users must be constantly informed as to what is going on and be completely aware of the expectations of the 'prototypes'.
References
Bersoff, E., Davis, A. (1991). Impacts of Life Cycle Models on Software Configuration Management. Comm. ACM.
Crinnion, J. (1991). Evolutionary Systems Development: A Practical Guide to the Use of Prototyping within a Structured Systems Methodology. Plenum Press, New York.
Davis, A. (1992). Operational Prototyping: A New Development Approach. IEEE Software.
Software Productivity Consortium (SPC). (1997). Evolutionary Rapid Development. SPC document SPC-97057-CMC, version 01.00.04.

What are some key concepts for effective development teams? [closed]

Where I work we've recently put together what we call the Development Standards Committee which is tasked with improving our procedures, processes, methodologies, tools, standards, and whatever we think would help us become a more effective team.
We've got a spreadsheet of items that we've ranked and are going to start tackling from the top down. We've got things such as better source control (currently on SourceSafe), implementing a bug tracker (such as Mantis or FogBugz), peer code review, moving to .NET 3.5, possibly moving to some form of Agile, doing more actual team development rather than single-developer-per-project type stuff, and some other things...
What do you think are some key things that can make or break a development team? What should we add to this list?
Some additional information: We have about 12 people on our Windows team, and about fifty in development if you include all platforms. We want to improve as much as possible for everyone, but our biggest focus is the Windows team. All of us have been here for a couple of years at least, so most of us know each other and work together pretty well.
The number of people on your team is actually really important here. There are basic things that every team should implement (source code control, bug tracking, etc.), but there are things that differ based on team size. Code reviews on a very small team, for instance, can be more informal.
Moving to Agile is a good idea, unless your particular development environment makes it a bad idea. Also, you'll not be able to do this without support from the people who are using your software.
Consider doing things to ensure that communication within the team is easier and has fewer roadblocks - do all your members know each other pretty well? Can you work with each other? Do you understand each other's idiosyncrasies? Learning to work as a team is much more important than any random process improvements you can make.
Require comments when you check in code (it's great if you can tie commits back to your bug tracker)
Maybe Static Code analysis, like what's built into Visual Studio
Continuous Integration like CruiseControl
Development teams really need good people to start with who work well together, but this isn't really an item to add to the list. It does, however, affect my first recommendation: be pragmatic. If you're not encouraging your developers to think about how they work and to drive themselves to improve, it's really hard to lay down a development environment that will do it for them.
Mentoring and training: if you can't do XP, then at least pair up your juniors with seniors whenever you can. Not only will you share knowledge, but you'll also share the context around the projects you own.
Some sort of continuous integration and regular, tested, working "releases" work wonders for quality.
as better source control (currently on SourceSafe)
If this is Visual SourceSafe -- you need to change this immediately. Try CVS, SVN, or even something paid like Perforce.
There exists something called Rational Unified Process that deals with your problem (and much more).

What is your company's stance regarding (technological) 'innovation'? [closed]

.NET 3.5, .NET 4.0, WPF, Silverlight, ASP.NET MVC - there's really a lot of new Microsoft technology released / on the horizon to try out these days.
(The examples I gave are all Microsoft technologies, but this can apply to any language or platform.) I am curious how this is handled in the company you work for. A few examples:
Do you have a CTO that determines what technology the company uses?
Are development teams free to choose what technology they use? For example: framework version, classic ASP.NET vs ASP.NET MVC, ADO.NET Entity Framework vs Linq2Sql or NHibernate? Or a mix of these?
What new technologies does the company you work for try out and why?
Does your company have dedicated resources (time) to try out WPF or whatever technology, just for research, or do you try things out in your spare time and try to introduce them to your company?
These are just examples to make my question clearer. To summarize, I'd like to know what this process looks like, who is responsible, and who makes the decisions. Does your company jump on the bandwagon, or is it reluctant to try new technologies? And are you comfortable with this situation?
At the company I work for, we still use .NET 2.0 (although we are now slowly switching to .NET 3.5), haven't seriously looked into ASP.NET MVC, haven't tried out WPF at all, et cetera. And some of us find it pretty hard to convince people to do so. Is it fair to expect otherwise?
At my company, we have an architecture group that determines which technologies are used. People are welcome to read up on alternative technologies and make suggestions, but at the end of the day, it's the architecture group that makes the decisions.
While this may seem restrictive, it does ensure that all of the development groups are using the same or similar technologies, and moving from one group to the next is fairly easy. As well, by having one group do all the research, you ensure that you don't waste time by having multiple groups duplicate the research effort.
Since I work at such a small company and am typically either the only developer or the lead developer in a very small group, I can usually convince my boss to use whatever I think would be best for a given project/situation.
We stick to what we know for our major and key projects within the company.
For any new "mini" projects that come along, we take the hit on the learning curve to try and build them in the latest technologies if at all possible.
This enables us to get up to speed on these things to then comfortably and safely use these technologies in our major projects as we see fit.
Where I work there is an architect team which looks at technologies from a high level and makes recommendations to the various actual teams. A subset of the architect team takes the technologies, experiments with them, and out of that produces:
Internal 1 hour overview sessions
Week long boot camps
Whitepapers/Posters
The more important the technology is, the more of that list is produced. All of that feeds into the teams, which, combined with customer requirements for technology, actually make the decision about what each team should use.
I have a mixed answer to this question. Where I work, lower-level technical managers are usually the ones that choose a certain technology, and sometimes even the developers have the freedom to try something new. For example, I really wanted to learn about JavaScript's Prototype library while working on a web site. I made the case to my boss; he was reluctant at first because nobody else knew it or had used it before, but he gave me the go-ahead. It was great for me to be able to learn Prototype and take advantage of its many built-in functions. Other, bigger projects come down from higher management and we don't really have much of a choice. Right now, my company is adopting SAP, so everything is moving in that direction. I don't necessarily want to become an SAP expert, but if I want to stay here, I'll need to at least learn how to work with it.
Every company has its own pace for innovation, and it's dependent first on the comfort level of the managers, and second on whether anybody actually does the work to research and propose using new things. When the managers start getting uncomfortable, innovation slows or stops until they get comfortable again. Some innovations they will never be comfortable with.
Keeping this in mind, I'm not sure how to answer your question about whether or not it's fair to expect more innovation than is happening. Certainly it's reasonable for you to want more; equally, once you've hit your organization's speed limit on innovation, it's not likely to change and, if it does change, it will probably take a long, long time.
I've been given rather large amounts of freedom to change things by various managers in my past, and I took advantage of it. I also ran into the limits on a regular basis, and finally dealt with my frustration by starting my own company. (This may be considered a somewhat drastic measure; certainly by doing so you reduce the time you have to research and develop the very things for which you started your company.)
These days I'm developing rather significant applications in Haskell, and I'm pleased as punch. After a year, I'm starting to get the hang of it, and I certainly have several more years ahead of me just learning what I can do with the tools I have now.
I suppose the summary of my response is: if you want to innovate more than those around you, you need to change your peer group.
I think any company that tries new technology for the sake of it, because it's bleeding edge and 'innovative', is crazy. To have a formal 'let's play with new technology to try it out' department is just nuts... unless they're in the business of providing technology consulting to other businesses.
For everyone else, technology is there to help the business get things done, not to help developers line their CVs with cool-sounding TLAs.
The company I'm working for at the moment is quite large and has a CTO who chooses 'strategic platforms'. But I have to say, if you can pick a technology, they're probably using it. They're too big to beat everyone down with the corporate stick, but they try. If the technology will work in the project and bring it in on time, then it gets used.
We need solid and proven platforms for our stuff, and we don't need anything fancy. Therefore we might go for .NET in 5-10 years or so; hopefully it's ready by then. On the other hand, Java is already mature enough, so we're using it alongside C++ and some Jython scripting. These decisions are pretty much autonomous (we're a small shop).
I don't mean to mock bleeding edge developers, but whether you need solidity or newest features obviously depends on what you're working on. Many scientists are still happily using Fortran 77.
