There are at least two common code file layout schemes:
- the so-called "socks drawer", where files go into directories according to their type (*.html in Views, *.cs in Models, controllers in Controllers, services in Services, and so on);
- by feature (or modular), where all the artifacts for one feature (models, views, controllers, etc.) are grouped together in one directory, common code is extracted into a separate group, and every other feature gets its own directory.
Everywhere I read, the latter is said to be better.
Reference examples:
http://cliffmeyers.com/blog/2013/4/21/code-organization-angularjs-javascript
http://www.codergears.com/Blog/?p=768
https://aelia.co/2012/10/03/how-to-neatly-structure-your-code/
That makes me ask: why is the socks-drawer layout for code packaging more popular than packaging by feature? Why is it enforced by frameworks (ASP.NET MVC, Ruby on Rails, etc.)?
Example of the socks-drawer layout (all controllers in the Controllers directory, all models in the Models directory):
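The original illustration is not reproduced here; a hypothetical sketch of such a layout might look like this (file names are invented):
Controllers/
    AccountController.cs
    OrdersController.cs
Models/
    Account.cs
    Order.cs
Views/
    Account/
        Login.html
    Orders/
        List.html
Services/
    OrderService.cs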
Example of the by-feature layout (controllers and entities grouped per application feature):
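Again hypothetically (the original illustration is omitted), the same files grouped by feature:
Accounts/
    AccountController.cs
    Account.cs
    AccountService.cs
    Login.html
Orders/
    OrdersController.cs
    Order.cs
    OrderService.cs
    List.html
Common/
    SharedLayout.html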
TL;DR: The answer to how you should organize your project is ultimately another question: what is the best way to arrange the files in your project to fit your team's workflow and the constraints of the architecture?
There are several things to consider when deciding how to lay out a project.
Does the underlying platform make distinctions between different types of files, or different functional areas? For example, Rails specifies a very specific layout that gives you a lot of behind-the-scenes magic for free.
Does your team's workflow suggest a particular arrangement?
What is the basic unit of work for the project? Does it make sense to divide the structure into modules, or is there some other layout that fits the project better? (Models/Views/Controllers, Presentation/Reporting/Business Logic/Database, HTML/Images/JavaScript/Dynamic Content)
What architecture best describes the project? Is this MVC? Is it n-tier? Is it based on some pattern like Factories, Producers/Consumers, etc? Make the layout match the architecture, and when you have to find that database tier code, you'll know right where to go.
Are you working in a compiled language that logically separates compiled code into units? (Think .NET assemblies.)
Is the code based on some existing framework, or a fork of some existing project? (In which case it makes sense to match the layout of the original.)
Does the business have rules/standards about how to lay out the project?
There are ultimately very few fixed rules to follow. The best layout, provided the language doesn't impose rules of its own, is the one that helps you find and understand the content of the code. If the sock-drawer approach makes that happen for you (and/or your team), then that's the best layout to use. If a modular approach works better for you, then use that. There's also nothing to say you can't use a "modular sock-drawer" approach, where the code is divided into modules and each module is divided by functional area.
Because it is the only feasible scheme for general-purpose frameworks. To organize by feature, you need to know the system's features, and those cannot be known before there is a specific system to implement on top of the framework.
There is an interesting concept that is related to this preference: convention over configuration. Convention over configuration (also known as coding by convention) is a software design paradigm which seeks to decrease the number of decisions that developers need to make, gaining simplicity, and not necessarily losing flexibility. When the convention implemented by the tool matches the desired behavior, it behaves as expected without having to write configuration files. Only when the desired behavior deviates from the implemented convention is explicit configuration required.
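A concrete, hedged illustration from ASP.NET MVC, which the question mentions: with the default route pattern "{controller}/{action}/{id}", a request to /Products/Details/5 is dispatched to a class found purely by naming convention, with no configuration entry mapping the URL to the class (the class below is illustrative):
using System.Web.Mvc;

public class ProductsController : Controller
{
    // Reached by /Products/Details/5 with zero mapping configuration.
    public ActionResult Details(int id)
    {
        // By convention, looks for Views/Products/Details.cshtml.
        return View();
    }
}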
Our product is a web-based course management system (ASP.NET, SQL Server). We have 10+ clients and may get more in the future.
Currently, if one of our customers needs extra functionality or customised business logic, we change the DB schema and code to meet the need.
(We have only one code base branch and one database schema.)
So that a change for one client does not affect the others, we use a client flag defined in a web config file; the extra fields and business logic apply only to that particular customer's system.
if (ClientId == "ABC")
{
    // Extra fields / business logic for client ABC
}
else
{
    // Normal route
}
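For completeness, a minimal sketch of how such a flag might be read, assuming it lives in the appSettings section of web.config (the key name and default are illustrative):
using System.Configuration;

// web.config: <appSettings><add key="ClientId" value="ABC" /></appSettings>
public static class ClientConfig
{
    // Returns the per-deployment client identifier, or a default when the key is missing.
    public static string ClientId
    {
        get { return ConfigurationManager.AppSettings["ClientId"] ?? "DEFAULT"; }
    }
}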
One of our senior colleagues said that this way a small company like ours can save the resources needed to support multiple code bases.
But I feel this strategy makes our code and database even harder to maintain.
Has anyone been in a similar situation? How do you handle it?
Update: If this is not the right kind of question for SO, can someone move it to a more appropriate Stack Exchange site?
Update 2: You are right. The code is becoming smelly, and I'm quite sure it will be a nightmare sooner or later. Our company builds the product, and to save effort, later products for other customers are based on the previous ones. I know the ideal way, as @e-j-brennan suggests, is to split the dev team into two parts: one team works on the core product and makes it highly customisable, and the other customises it for a particular client. However, since our company is so small, it is a real dilemma. :(
I think you need to decide whether you sell custom software that you tailor for each client, or 'off-the-shelf' software that is one-size-fits-all (and maybe customizable through functionality you provide).
When you only have a handful of clients like you do now, you can get away with what you are doing, but I can almost guarantee that if you continue down this road, and your client base and the amount of client-specific customization both grow, you will have a nightmare on your hands. I've been through this many times for multiple clients, and it always ends the same way: it is all manageable until it is not, and then it is a royal pain-in-the-neck that can make your life very difficult indeed.
If you decide you are a custom company, and want to have multiple versions of the software and database, that is fine; just make sure you charge the full cost for it - i.e. factor in that you may need to maintain multiple versions of the source code and database, and that upgrades will take many multiples of the effort to roll out, since you will need to test each client's code base.
If you decide you want to be an 'off-the-shelf' type of product, then your best bet is to give each client the ability to customize their experience without code changes - i.e. build the customization capability in through config screens and tables that control how things work - while everyone still uses the same underlying code and database. Much more work up front, but it saves you boatloads of time down the road.
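A rough sketch of what "configuration instead of code" could look like here - a per-client settings lookup that replaces scattered client checks (the table layout and names are hypothetical, not the poster's actual design):
using System.Collections.Generic;

// Hypothetical table: ClientSettings(ClientId, SettingKey, SettingValue)
public interface IClientSettings
{
    bool IsEnabled(string featureKey);
}

public class DbClientSettings : IClientSettings
{
    private readonly IDictionary<string, string> _settings;

    // Settings for the current client, loaded once from the database.
    public DbClientSettings(IDictionary<string, string> settingsForCurrentClient)
    {
        _settings = settingsForCurrentClient;
    }

    public bool IsEnabled(string featureKey)
    {
        string value;
        return _settings.TryGetValue(featureKey, out value) && value == "true";
    }
}

// Business code then asks "is this feature on?" instead of "is this client ABC?":
// if (settings.IsEnabled("ExtendedGradeBook")) { ... }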
I have also been in your position, and I agree it is a difficult one. In my case, I was building custom single-product sites for clients. While each site followed a similar layout and workflow, there had to be enough flexibility for each to have a wholly custom design, custom rules around shipping and coupons, and different merchant gateways and configurations.
After some years, we did end up with something maintainable. First, we created libraries to house all of our common code and put those libraries into a TFS project simply called Common. Then, we created a new TFS project for each site (not client, as many clients had multiple products/sites) and branched the applicable projects into them from Common. Next, we created a VS Template project that contained a skeleton of the site, including "design-less" views, controllers, and their action methods (remember, each site had the same basic flow). Also, each site ran on its own database, which was cloned from an otherwise unused and mostly empty Template DB.
With each site running on its own branch and DB, modifications could be made to the original flow and design that was installed by the template (which would never need to be merged back in) without affecting any other site. For customizing business methods, like shipping calculations, we could create a subclass of the common class and override where needed. Part of what enabled this was converting all our code to use Dependency Injection. Specifically, each Controller had injected Services, and each Service had injected Repositories. Merchant Processing was also coded to an interface and injected. Also worth mentioning is that this allowed us to hard-code all of the upsell logic for each site (you bought product X, so we recommend Y), which was much easier to create and maintain compared to defining complex configuration rules in our old upsell rule engine. I don't know if you have anything like that...
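A minimal sketch of the constructor-injection shape described above - services receive repositories, and a site-specific subclass overrides only what differs (all types below are invented for illustration):
public class Order
{
    public int Id;
    public decimal Weight;
}

public interface IOrderRepository
{
    Order GetById(int id);
}

public interface IShippingService
{
    decimal CalculateShipping(Order order);
}

public class StandardShippingService : IShippingService
{
    protected readonly IOrderRepository Orders;

    // The repository is injected; controllers receive services the same way.
    public StandardShippingService(IOrderRepository orders) { Orders = orders; }

    // Illustrative default rule shared by most sites.
    public virtual decimal CalculateShipping(Order order)
    {
        return order.Weight * 1.5m;
    }
}

// One site's branch overrides just the shipping calculation.
public class FlatRateShippingService : StandardShippingService
{
    public FlatRateShippingService(IOrderRepository orders) : base(orders) { }

    public override decimal CalculateShipping(Order order)
    {
        return 9.95m;
    }
}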
Sometimes we would want to make a change to the Common code itself, which was usually prompted by a specific need for a specific site. In that case, we'd make the change on that branch, merge it to Common, and then merge it to the other sites at our convenience (great for "breaking" changes or changes that also required a change to the DB). Similarly for DB changes, we would update the Template DB and then write a little script to update the other site DBs with the same schema changes (still had to be smart and careful about it).
An added benefit was that we also created Mock repositories that would be used/injected in a "Design" build configuration, which enabled the designers to jump around the application and work on screens without literally submitting themselves to the workflow. It also allowed them to start working on a site before there was anything done on the back-end, which was very important for those anxious clients who need to "see something".
10+ clients is definitely not a small number with what you're talking about. Three was pain enough for me. We had over 30 sites running at one time, maintained by three developers and two designers.
Finally, I know it's outside the scope of your question and a bit presumptuous, but getting "final" client sign-off on design before the designers actually went about implementing it (and before devs did their thing) also saved us a lot of costly rework. I know no design is final, but increasing efficiency on the implementation end gave the clients less time to change their minds about the design they approved.
I hope that at least gives you some approaches to think about.
People working with systems that have to change or be customized have developed patterns to handle such concerns.
You should definitely start by reading a good book on Inversion of Control. In short, you build your system by defining building blocks (contracts, expressed as interfaces) and providing multiple implementations. Such an approach has many benefits; to mention just two:
- you can handle customizations by providing different implementations of the same interfaces (see the sketch below);
- you can reconfigure your application statically or dynamically, and either way is far cleaner than your "if".
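As a sketch of that idea: a contract with a default implementation and a per-client implementation, where the composition root (or the IoC container's configuration) picks one per deployment, so business code never branches on the client (all names are invented):
public interface IEnrollmentPolicy
{
    bool CanEnroll(int studentId, int courseId);
}

// Default behaviour shipped with the core product.
public class DefaultEnrollmentPolicy : IEnrollmentPolicy
{
    public bool CanEnroll(int studentId, int courseId)
    {
        return true;
    }
}

// Customised behaviour for one client, kept in its own assembly or module.
public class AbcEnrollmentPolicy : IEnrollmentPolicy
{
    public bool CanEnroll(int studentId, int courseId)
    {
        // Hypothetical client-specific rule.
        return courseId != 42;
    }
}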
When it comes to the data layer, study the repository pattern. It helps to organize data access in a way that lets you switch between different providers, and it fits great with IoC.
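A minimal repository contract, as a sketch of how data access stays swappable (names are illustrative):
using System.Collections.Generic;

public class Course
{
    public int Id;
    public string Title;
}

public interface ICourseRepository
{
    Course GetById(int id);
    IEnumerable<Course> GetAll();
    void Save(Course course);
}

// One implementation per provider (SQL Server, NHibernate, in-memory for tests, and so on);
// the business layer depends only on the interface, so providers can be swapped via IoC.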
And just a technical tip - NHibernate supports dynamic properties. You provide additional columns in the mapping and NHibernate can handle them from the same code base. This way you can target different databases with slightly different DB schemas.
I generally use the IoC pattern in my projects, which are mostly ASP.NET based. Are there any guidelines on how to structure a typical 3-layered project (UI + BL + Data Access)? I'd like to know how the folders should be laid out, where constants should be kept within each layer (I keep all strings such as query string parameters, stored procedure parameters, etc. in a singleton class named Constants), how classes in the business layer should interact with the data access layer, and similar code-structure questions.
Is there any guidance or a book on this?
Microsoft has a plethora of information on this. I've used Microsoft .NET: Architecting Applications for the Enterprise as my bible for software architecture
http://www.amazon.com/Microsoft%C2%AE-NET-Architecting-Applications-Pro-Developer/dp/073562609X
Check out this MSDN guide as well
http://msdn.microsoft.com/en-us/library/ff647095.aspx
Also, take a look at some application frameworks like Sharp Architecture for examples
http://sharparchitecture.net/
A lot of NHibernate tutorials demonstrate software design principles that can be applied to any solution
http://nhforge.org/blogs/nhibernate/archive/2010/04/25/first-three-nhibernate-quickstart-tutorials-available.aspx
@robbymurphy has a great answer. I would only add that I keep most constants and interfaces in a separate project/assembly altogether. I call this my "core" assembly and define interfaces there that allow me to pass data from the top of the stack to the bottom without tightly coupling the layers.
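A sketch of what such a "core" assembly might hold - contracts only, with no references to the UI, business, or data-access projects (names are invented):
namespace MyApp.Core
{
    // Contract passed between layers, so UI and data access never reference each other directly.
    public interface ICustomerDto
    {
        int Id { get; }
        string Name { get; }
    }

    public interface ICustomerReader
    {
        ICustomerDto GetCustomer(int id);
    }
}

// The UI and business projects reference only MyApp.Core;
// the data-access project implements ICustomerReader.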
It is not so much where they are used, but for what purpose. I once attended a seminar class where the instructor pounded "high cohesion, low coupling" into our heads, over and over.
Keep things that belong together in the real world together, but reduce dependencies between objects whenever possible.
This is a cohesion question as well as a coupling issue: if the constants are truly internal to a class, make them private static members (e.g. an internal state enum). If they are truly internal to a project, create a class for them and make it internal (e.g. a database-specific constant in your data layer). Otherwise, put them in a public class in their own project.
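A hedged illustration of those three placements (all names are invented):
// 1. Truly internal to a class: private members of that class.
public class OrderStateMachine
{
    private enum State { New, Paid, Shipped }   // internal state enum
    private const int MaxRetries = 3;
}

// 2. Internal to one project: an internal class in that project (e.g. the data layer).
internal static class DbConstants
{
    internal const string GetOrdersProc = "usp_GetOrders";
}

// 3. Shared across projects: a public class in its own ("core") assembly.
public static class QueryStringKeys
{
    public const string OrderId = "orderId";
}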
The DevExpress XAF does much of the basic work for you: it creates a database based on your business objects and dynamically generates a UI from them, with basic functions like add, delete, sort, etc. already present.
This leaves me wondering how to go about properly designing and modelling an application built on this framework. I could model only my business objects, or I could identify the functions provided by the framework and include them in a detailed model down to sequence diagram level, but so much is being done by 'external' calls that I feel I would be wasting valuable time.
I am hoping someone with experience modelling application designs for this specific framework can give me some advice on what areas I should focus on.
As for DC, as Leon mentioned above, it has many benefits compared to the regular persistent classes. If all goes according to plan, we will release the Domain Components technology in the near future, and resolve all the remaining issues with it.
If you feel that it is hard to learn, please let us know the most difficult parts you experienced. We will be glad to review them and possibly make the things easier for you and other users.
P.S.
I apologize for the delay in responding; I was on sick leave. You will receive more timely responses if you post your questions in the DevExpress Support Center.
@ProfK:
Am I correct that you are looking for something like a visual designer for your business models?
If so, then I am afraid that XPO (XAF) does not currently provide such functionality. However, you can use free third-party tools for modeling, such as the Liekhus ADO.NET Entity Data Model XAF Extensions.
I hope you find this information helpful.
I've been using XAF for almost two years now and I'm very happy with it. Developing an app is very quick, the architecture is nice, you get both Win and Web at the same time, and a great UI. As with all frameworks, it has a learning curve, but if you're already familiar with DevExpress controls it's not very hard.
As Dennis mentioned, most behaviour can be overridden or extended. Regarding your modelling question, I think an important choice you have to make is whether or not you will use their Domain Components (DC) technology. Basically there are two ways: the old-fashioned way of inheriting from the XAF or XPO base classes, or using DC. DC allows a clean separation into modules and allows multiple inheritance. They achieve that by generating classes at runtime, but it still has some issues.
And the framework comes with a Business Class Library, a set of common classes which may be useful.
When I get stuck or cannot find the answer myself, I always use their fantastic Support Center. Most issues I ran into had already been asked and answered on that site.
Briefly, each XAF application consists of Modules. There can be standard (system) and extra (user-defined) modules. Each Module can contain business objects, so-called Application Model customizations, Editors, Controllers, and Actions to provide additional business logic, customize the UI, and provide interaction between framework parts. You can model and customize your application at each of the levels listed above, including but not limited to the underlying framework's metadata and data store levels. You can find more information about the framework's architecture here:
http://documentation.devexpress.com/#Xaf/CustomDocument2559
I should emphasize that it is possible to override behavior of almost every part of the framework. For instance, create your own editors for detail and list forms, override certain standard controllers, etc.
If you experience any further difficulties with our framework, feel free to contact us through our Support Center. We will always be glad not only to answer your questions, but also to advise on a particular technical or design solution, provide some example code, etc.
I am currently facing a situation where I as an advocate of test driven development have to compete with an advocate of model driven software development (MDSD) / model driven architecture (MDA).
In my opinion, code generation is a valuable tool in my toolbox, and I make heavy use of templates and automation when needed. I also create UML diagrams when I think it helps to understand the inner workings or to discuss architecture at the whiteboard. However, I strongly doubt that creating software via UML (creating statecharts and sequence diagrams to produce working code, not just skeletons) is more efficient for multi-tier applications (database layer, business/domain layer and a GUI, maybe even distributed). It seems to me that with MDSD, the CASE tooling suddenly isn't just a tool anymore but the thing to satisfy: as I see it, MDSD developers profit from the higher abstraction UML gives them, but at the same time they struggle with modifying the code generator/template/engine to fulfill needs that could easily be implemented (and tested) with another tool out of their toolbox (Visual Studio, Eclipse, ...).
All this makes me wonder whether there has been a success story (success meaning that the product was rolled out on time, within budget, with only few bugs, and parts of the software were reused later on) for a real-world application that fulfills all of the following criteria and was developed using a strict model-driven approach:
- it has nothing to do with the Object Management Group (OMG) or with consultants related to MDSD/MDA/SOA;
- the application is not related to Business Process Modelling and is not a CASE tool itself;
- the application is actively used by end users;
- it has at least three tiers, including a user interface that goes beyond displaying raw table values, and is not one of the common MDA/MDSD examples ("how to model a coffee machine, traffic light, dishwasher").
A tiny, but nevertheless useful testimonial on the use of MDSD has been posted on the Model Driven Software Network:
http://www.modeldrivensoftware.net/profiles/blogs/viva-mdd-follow-up-building-a?xg_source=activity
It is a relatively small app being developed, but still a good example of MDSD in action.
More success stories are listed at Metacase's site (http://www.metacase.com/cases/index.html). Metacase sells MetaEdit+, which implements DSM (Domain-Specific Modeling). DSM is just a form of MDSD.
I am also developing ABSE (Atom-Based Software Engineering), another form of MDSD, very close to DSM. ABSE is outlined at http://www.abse.info.
I used MDA and code generation on an embedded system project using 4 processors connected via CAN. We had over 20 axes of motion and many, many sensors. The system was highly robust and maintainable as the mechanical components were evaluated and modified.
We worked in the models and generated code so the models were always up-to-date. We did a careful domain analysis to achieve subject matter isolation. The motor control required very high performance and so was not modeled or generated. Our network drivers were also hand-coded, and we wrote interfaces that allowed bridge services to send events to any service anywhere in the system as needed (although this was tightly controlled so as to minimize interprocessor dependencies).
Using the method took a bit of discipline, but having working models was great because they can be reviewed by non-software types.
Version control and differencing of the models was a bit of a challenge but we had a small, localized team so we were able to avoid merge issues.
The good people at Pathfinder Solutions (our tool vendor) can help mentor you through the project.
You could also take a look at the slides from previous Code Generation conferences. Several of these talks were from successful case studies e.g. http://www.codegeneration.net/cg2009/slides.php
I am working on a legacy modernization project using an MDA tool named Bluage. It is for a big healthcare organization and it is in production, so I could say it is successful. MDA is a good fit for legacy modernization because it can generate a KDM model from technologies such as Pacbase that are going out of support.
I worked on a MDSD system that generated admin style web apps in Google Closure. I believe that your question is compelling. Too much complexity and your MDSD system is too hard to use. Too simple and you won't generate apps that are useful in the real world. Where MDSD really shines is in saving developer time typing lots of plumbing style code but how can MDSD remain effective over multiple releases? Requirements can go in many directions. That is the real challenge. I recently blogged about my MDSD lessons learned on that project.
Scrum development is based on listing user stories and implementing them during sprints. That approach - focusing on the actual goals of the end product - definitely has its virtues, but what bugs me is that it doesn't advocate creating any generic/reusable code in the process; in fact, I feel it advocates hacking. For example, if a user story says
Must be able to plot x versus y, and fit a line there.
my first thought is that, "hey, I need to create a generic graphing framework so that I can handle similar cases more efficiently later on". But that's not the goal in the scrum sprint; the goal is simply what the user story says.
So it is more desirable (from Scrum viewpoint) to simply hack something together so that the user story gets implemented, instead of trying to understand the big picture and creating something more generic (which, of course, takes more time initially).
Is this unavoidable? Have I misunderstood something? How do you combine Scrum'ing an actual product with creating something reusable at the same time? Is reusability old-fashioned and overrated?
I would only spend the time building a generic graphing framework when you actually need one. For the first sprint, write something that plots X versus Y. That might be as far as you ever go with graphing, in which case there would be no need to write a framework.
If in further sprints you need to do more graphing, then create your framework, and work time into those sprints to allow for it.
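As a hedged illustration of the "simplest thing first" idea: the first sprint could be one concrete routine that fits the line (plotting being left to whatever charting control is already at hand), with no framework around it; the names below are invented:
using System;
using System.Linq;

public static class SimplePlot
{
    // Fits y = slope * x + intercept by ordinary least squares.
    public static Tuple<double, double> FitLine(double[] x, double[] y)
    {
        int n = x.Length;
        double sx = x.Sum();
        double sy = y.Sum();
        double sxx = x.Select(v => v * v).Sum();
        double sxy = x.Zip(y, (a, b) => a * b).Sum();

        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        return Tuple.Create(slope, intercept);
    }
}

// Only if later sprints need bar charts, histograms, etc. is an abstraction such as an
// IChartRenderer interface worth extracting - and by then there are real use cases to
// generalise from.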
Generally, if you create a generic solution without an actual need for it, you are not following the agile approach. You should avoid refactoring in advance; otherwise it is gold plating, where you add functionality that is not needed and not required by your customer at the moment (the priority approach).
But sometimes a reusable component really is needed. This usually happens when more than one team plans to use the same component, or when a custom framework is created separately. In Scrum you can handle this as follows: the main project that will use the component becomes the product owner for the component. It defines the features it needs as user stories, and the component team implements those features and provides the component to the main team iteratively.
So suppose you have two projects that expect to need a component for credit card payments. The two teams collect user stories with priorities and provide them to the component team. Together they plan delivery so that the component team provides only the functionality the main teams need in the current sprint.
As Fermin says, the first time you need something isn't the time to start building a framework. YAGNI: you just build something that plots X vs Y.
Going further, I have found that even the second time you need something, it's still not time to build a framework yet. The problem with frameworks built on one or two use cases is that it's rare they'll actually be useful and generic enough for anything more than those one or two use-cases.
Building general, reusable, code is hard. There is nothing more useless and confusing to another developer coming after you than something that appears to be a framework, but is actually only used by one or two projects and is in fact tightly coupled with those projects.
One of the founding principles of the X Window System was:
The only thing worse than generalizing from one example is generalizing from no examples at all.
Good advice I'd say!
I think the issues of reusability and code quality lie outside of the team process dimension. Well maybe not entirely, but at least the agile approach does not deal with those. You're free to put in some extra effort to increase the reusability ratio or just quickly hack things together.
You could add some extra fixed time to each sprint to be used explicitly for code review and working on reusability.