Clean Architecture (DDD): Why are domain objects (DB Entities) and DbContext in separate projects? - ardalis-cleanarchitecture

I understand the need for abstraction, separation of concerns, and unit tests; however, it seems to me that separating the entities and the context into two projects is slight overengineering.
I could really be missing something, but is this because you want to stay open to different ORMs?
Many thanks for the clarification.

The main reason I prefer to have Infrastructure in a separate project, rather than just a separate folder, from the domain model (Core project) is simple: enforcing my design via the compiler.
I have a design rule, which is basically the Dependency Inversion Principle: don't depend on low-level implementations (such as those found in Infrastructure); instead, depend on abstractions (interfaces). Also, don't have your abstractions depend on details; have details depend on abstractions. The details of how, and which, infrastructure is used for a given abstraction live in the Infrastructure service implementations.
Abstractions say what; implementations say how.
What: I need to send an email.
ISendEmail interface
How: I want to do it using the SMTP protocol.
SmtpEmailSender class (implements ISendEmail)
How: I want to do it using the SendGrid API.
SendGridEmailSender class (implements ISendEmail)
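Here is a minimal C# sketch of that what/how split. Only the three type names come from the list above; the method signature and SMTP host are my own placeholder assumptions:

```csharp
// Core project: the abstraction says WHAT the app needs.
public interface ISendEmail
{
    void SendEmail(string to, string from, string subject, string body);
}

// Infrastructure project: HOW #1, plain SMTP via System.Net.Mail.
public class SmtpEmailSender : ISendEmail
{
    public void SendEmail(string to, string from, string subject, string body)
    {
        using (var client = new System.Net.Mail.SmtpClient("localhost")) // placeholder host
        {
            client.Send(from, to, subject, body);
        }
    }
}

// Infrastructure project: HOW #2, a hosted API (the actual call is elided here).
public class SendGridEmailSender : ISendEmail
{
    public void SendEmail(string to, string from, string subject, string body)
    {
        // Call the SendGrid REST API here; omitted in this sketch.
    }
}
```

Core compiles against ISendEmail alone; which sender actually runs is wired up at composition time in the Infrastructure/UI projects.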
So, in a single project, how would you ensure that the implementations depend on the interfaces, and not vice versa?
How would you ensure your domain classes didn't directly reference or use Infrastructure types?
I'm not aware of a way to do this.
But if you put them in separate projects, and you have the implementation details project depend on the abstractions-and-models project, you now have solved the problem. The compiler WILL NOT ALLOW the Core project to reference anything in the Infrastructure project, because it would create a circular dependency.
This constraint helps developers do the right thing and keeps them in the pit of success, even if they don't completely grok how the Dependency Inversion Principle works or why it's important.
And I've never found 3 projects (Core/Infra/UI) to be overengineering for any non-demo app I've built for real work. It's only 3 projects.

Related

Usage of Dependency Injection other than writing unit test friendly programs

What use is there for Dependency Injection other than writing unit-test-friendly programs?
I have used it in several projects and I like this approach. However, I was wondering: what is the real use of this pattern? Give me just one use, but with a proper explanation and code if possible.
Plenty of information if you Google it. From Wikipedia:
Advantages
Because dependency injection doesn't require any change in code behavior, it can be applied to legacy code as a refactoring. The result is more independent clients that are easier to unit test in isolation using stubs or mock objects that simulate other objects not under test. This ease of testing is often the first benefit noticed when using dependency injection.
Dependency injection allows a client to remove all knowledge of a concrete implementation that it needs to use. This helps isolate the client from the impact of design changes and defects. It promotes reusability, testability and maintainability.
Dependency injection can be used to externalize a system's configuration details into configuration files allowing the system to be reconfigured without recompilation. Separate configurations can be written for different situations that require different implementations of components. This includes, but is not limited to, testing.
Reduction of boilerplate code in the application objects since all work to initialize or set up dependencies is handled by a provider component.
Dependency injection allows concurrent or independent development. Two developers can independently develop classes that use each other, while only needing to know the interface the classes will communicate through. Plugins are often developed by third party shops that never even talk to the developers who created the product that uses the plugins.
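To give the requested single concrete use: constructor injection lets the same client run against a production implementation or a test stub without any code change. A minimal sketch, with all names hypothetical:

```csharp
public interface INotifier
{
    void Notify(string message);
}

public class OrderProcessor
{
    private readonly INotifier _notifier;

    // The client declares what it needs; it never instantiates a concrete type itself.
    public OrderProcessor(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void Process(int orderId)
    {
        // ... business logic ...
        _notifier.Notify("Processed order " + orderId);
    }
}

// In a unit test, a hand-rolled stub stands in for the real notifier:
public class StubNotifier : INotifier
{
    public string LastMessage;
    public void Notify(string message) { LastMessage = message; }
}
```

Swapping `StubNotifier` for, say, an SMTP-backed notifier is a one-line change at the composition root, which is also where a config file could drive the choice (the reconfiguration-without-recompilation point above).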

Modular Software Design

I am trying to implement modular design in an ASP.NET project, dividing the application into different modules like HR, Inventory Management, etc. Since I am trying to keep the different modules independent of each other, I separated them in such a way that each module is a separate Visual Studio solution with its own UI, BLL, DAL, and even a separate database schema.
Up till now I thought this was common practice for developing management systems and ERPs, but I have been searching the web for the last three days and have hardly found any helpful material on developing modular applications. Most of what I found is mere theory explaining the concepts of cohesion and coupling, not real-world scenarios. So I wonder:
Is this the right approach to separating modules?
How are real-world modular applications developed?
How should the different modules communicate with each other while still staying independent of each other?
I think there should be a core application which makes use of these modules; how should the core application communicate with them?
There are some data, entities, and objects common to all modules. Should I put them in the core module so the other modules can use them (I think this will couple the modules to the core), or should every module maintain its own copy of the data and define those objects itself (which I think violates DRY)?
Any thoughts or links are warmly welcome.
This is a personal opinion and is debatable.
I separated these modules in such a way that each module is a separate Visual Studio solution with its own UI, BLL, DAL, and even a separate database schema.
Sounds like total overkill. Abstraction over abstraction makes your application a pain in the neck to maintain, support, and enhance. Is it really so large that you need to separate modules into separate solutions?
Is this the right approach to separating modules?
No, I think it is total over-engineering. I would suggest using projects, not separate solutions, to separate modules. The problem with separate solutions is that they require an external dependency-management tool, which takes a lot of effort to bring in and later maintain.
How are real-world modular applications developed?
Using abstraction (interfaces and abstract classes) and separate projects.
How should the different modules communicate with each other while still staying independent of each other?
By using interfaces, DI, IoC, and TDD.
I think there should be a core application which makes use of these modules; how should the core application communicate with them?
Core does not communicate with modules. In fact, it should ideally not depend on any other project/library. That makes it simple to reference and use in large solutions.
There are some data, entities, and objects common to all modules. Should I put them in the core module so the other modules can use them (I think this will couple the modules to the core), or should every module maintain its own copy of the data and define those objects itself (which I think violates DRY)?
I would highly recommend keeping a single copy in the Core project. See this question for details on why.
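To make the interfaces-plus-DI answer above concrete, here is a minimal sketch of two modules sharing one contract and one entity from Core; every name is hypothetical:

```csharp
// Core project: shared entity and contract. Core references nothing else.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IEmployeeDirectory
{
    Employee FindById(int id);
}

// HR module: implements the Core contract against its own schema.
public class HrEmployeeDirectory : IEmployeeDirectory
{
    public Employee FindById(int id)
    {
        // Query the HR schema here; stubbed out for this sketch.
        return new Employee { Id = id, Name = "Sample" };
    }
}

// Inventory module: consumes the contract without referencing the HR module.
public class InventoryService
{
    private readonly IEmployeeDirectory _directory;

    public InventoryService(IEmployeeDirectory directory)
    {
        _directory = directory;
    }

    public string WhoCheckedOutItem(int employeeId)
    {
        return _directory.FindById(employeeId).Name;
    }
}
```

Both modules reference Core, never each other; the DI container pairs them up at startup.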
This is one of those topics that is entirely subjective for the most part, but you may wish to consider a SOA (Service Oriented Architecture).
Using SOA, you can define a service (for this example, I'll stick to web services, though other service types exist depending on requirements) for each business area - an HR web service, a projects web service, a finance web service and so forth.
You can then bring all these together with a front end system that will communicate with and utilise these services, that would normally be your core application, though depending on your needs and requirements you may opt for multiple front end systems.
For the front end system I would recommend using ASP.NET MVC which has the concept of areas and will let you separate the front end into specific areas - an HR area, a projects area, a finance area and so forth that will contain the models and views for each specific area.
Doing this will let you build in a modular manner. You can build your first web service, say the HR web service, with methods for getting the relevant HR data, and then build the HR area of your MVC application. Expanding then simply means building the next web service and creating its front end in the MVC application. There is nothing stopping, say, the HR area from accessing the finance web service if it needs finance information, but everything still stays in distinct, independent modules.
Using this method can also aid future interoperability - other systems in the company may find it useful to interact with certain web services. For example, in a previous role it was useful for the company's engineering software to integrate with the projects team's web service, as it allowed engineering-related information to be linked to its related project.
If the system grows in terms of resource requirements, it should also be fairly scalable, as it is trivial to, say, offload the projects web service to another server if it starts eating a lot of system resources. It also allows you to swap modules out if need be - if you ever decided to move to, say, a Linux/Java platform, you could migrate by porting module by module with no real interruption to the overall system.
But of course, as I say, this is simply one such option and much of it depends on the specifics of your circumstances.
It is too late to answer, but the question seems interesting.
Since I am trying to keep the different modules independent of each other, I separated them in such a way that each module is a separate Visual Studio solution with its own UI, BLL, DAL, and even a separate database schema.
It depends on the scale of your application. If you are creating a very small, simple application with little functionality, then it is safe to have a single combined assembly. Or, if you want, just separate the UI from the other modules; at minimum that helps you emphasize separation of concerns. Keep in mind that loading multiple assemblies can be slower than loading a single one.
Is this the right approach to separating modules?
Module separation always has a drawback: it requires mapping between the parts. That means slower performance in general (perhaps negligibly, but it is still there) and slower development. If your application will be large and complex enough, it is worth it, since you can create unit tests for each module.
How are real-world modular applications developed?
There is no single exact practice; every problem needs its own solution. You won't need heavy multi-threading or a dependency-injection architecture for a simple calculator application.
How should the different modules communicate with each other while still staying independent of each other?
Using interfaces. You can change the implementation later on. For example, say you currently use C# WinForms for your application and communicate with the BLL through an interface. Later, when you want to migrate to ASP.NET, you just change the implementation, keeping the interface to the BLL the same.
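A tiny sketch of that migration scenario (the names are made up): the BLL interface is the stable seam, and only the front end changes.

```csharp
// BLL contract: referenced by both the WinForms app and the ASP.NET app.
public interface IOrderService
{
    decimal GetOrderTotal(int orderId);
}

// Today: a WinForms form calls the interface.
// Tomorrow: an ASP.NET controller calls the very same interface.
public class OrdersController
{
    private readonly IOrderService _orders;

    public OrdersController(IOrderService orders)
    {
        _orders = orders;
    }

    public string Show(int orderId)
    {
        return "Total: " + _orders.GetOrderTotal(orderId);
    }
}
```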
I think there should be a core application which makes use of these modules; how should the core application communicate with them?
There are some data, entities, and objects common to all modules. Should I put them in the core module so the other modules can use them (I think this will couple the modules to the core), or should every module maintain its own copy of the data and define those objects itself (which I think violates DRY)?
I assume it is an enterprise-level application whose modules share the same data, such as employees. If that data really needs to behave uniformly, then you should provide the very basic logic at the core level. At the application/implementation level, you may have different implementations to fulfill each requirement.
Do not force all of the business logic to be uniform in the core. If a specific application needs a different implementation, it is hard to make the core configurable enough.

New Prism Project - Use MEF or Unity?

I'm starting a new personal Prism 4 project. The Reference Implementation currently uses Unity.
I'd like to know if I should use MEF instead, or just keep to Unity.
I know a few discussions have mentioned that these two are different, and they do overlap, but will I be missing out if I simply choose Unity all the way?
Also check out the documentation:
Key Decision: Choosing a Dependency Injection Container
The Prism Library provides two options for dependency injection containers: Unity or MEF. Prism is extensible, thereby allowing other containers to be used instead with a little bit of work. Both Unity and MEF provide the same basic functionality for dependency injection, even though they work very differently.
Some of the capabilities provided by both containers include the following:
They both register types with the container.
They both register instances with the container.
They both imperatively create instances of registered types.
They both inject instances of registered types into constructors.
They both inject instances of registered types into properties.
They both have declarative attributes for marking types and dependencies that need to be managed.
They both resolve dependencies in an object graph.
Unity provides several capabilities that MEF does not:
It resolves concrete types without registration.
It resolves open generics.
It uses interception to capture calls to objects and add additional functionality to the target object.
MEF provides several capabilities that Unity does not:
It discovers assemblies in a directory.
It uses XAP file download and assembly discovery.
It recomposes properties and collections as new types are discovered.
It automatically exports derived types.
It is deployed with the .NET Framework.
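A minimal sketch contrasting the two registration styles, assuming classic Unity (the Microsoft.Practices.Unity package) and MEF 1 from the .NET Framework; `IGreeter` and `HelloGreeter` are made-up example types:

```csharp
using System.ComponentModel.Composition;          // MEF: ships with .NET 4
using System.ComponentModel.Composition.Hosting;
using Microsoft.Practices.Unity;                  // Unity: separate package

public interface IGreeter { string Greet(); }

// MEF discovers this type via the attribute; Unity needs no attribute at all.
[Export(typeof(IGreeter))]
public class HelloGreeter : IGreeter
{
    public string Greet() { return "Hello"; }
}

public static class ContainerComparison
{
    public static void Main()
    {
        // Unity: explicit, imperative registration.
        var unity = new UnityContainer();
        unity.RegisterType<IGreeter, HelloGreeter>();
        IGreeter fromUnity = unity.Resolve<IGreeter>();

        // MEF: declarative discovery from a catalog (here, the current assembly).
        var catalog = new AssemblyCatalog(typeof(ContainerComparison).Assembly);
        var mef = new CompositionContainer(catalog);
        IGreeter fromMef = mef.GetExportedValue<IGreeter>();
    }
}
```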
I am currently doing the same investigation. Last week I attended the p&p symposium in Redmond and had the chance to chat with some of the p&p people about this.
MEF
+ Part of .NET, no need for extra libraries
+ Very powerful in extensibility and modularity scenarios
- More generic approach, less flexible for DI scenarios
- You need to decorate with attributes; your code is glued to MEF
Unity
+ Very flexible for DI scenarios
+ If you stick with constructor injection and avoid named instances, you don't need any attributes; most of your system doesn't rely on Unity
- No out-of-the-box support for extensibility and modularity scenarios
- Need to deploy the third-party libraries
What I think is a good idea is to use MEF for extensibility (manage the modules of your app, localize registrations) and use Unity for DI.
It has to be clear that MEF implements inversion of control, but that does not make the two libraries the same; there is a difference: we use Unity when we have static dependencies, while MEF provides us with dynamic dependencies.
MEF also provides extensibility, by which we can introduce a port-style mechanism and can also specify the type of component that can interact via that port.
More can be found in the MSDN documentation.

How to structure a utility/companion project in a multi-project solution

Let's say I have a Visual Studio solution with a Web project, a BLL project, and a DAL project. I'm trying to follow the repository pattern keeping my SQL code in the DAL with an interface that is referenced by the BLL.
I have a handful of common solutions for things such as error handling, usage logging, and other things that can be considered utility functions (i.e. not in the business spec). I'm keeping these in a Common project.
Here are a few ideas I've had with regards to structuring the Common project...
Bundle SQL with logic in a given class
Create a layered solution within the Common project
Discard the Common project and put utility functions in with BLL/DAL
Is one of these ideas better or worse than the others? Does anyone have a better solution?
It's worth noting that these utility functions will be reused in a variety of other applications.
Instead of creating a Utilities project to be consumed directly, have you thought about creating something that provides a service? You might want to look at Aspect-Oriented Programming (AOP). Red flags went up when I saw you list your examples: error handling, usage logging, etc. Those scream AOP.
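For a feel of what the AOP suggestion buys you, here is a hand-rolled decorator that pulls logging and error handling out of a repository; a real AOP tool (or container interception) would weave this in for you. All names are hypothetical:

```csharp
using System;

public interface IOrderRepository
{
    void Save(int orderId);
}

public class OrderRepository : IOrderRepository
{
    public void Save(int orderId) { /* the actual DAL work */ }
}

// The cross-cutting concerns live here, not in the repository itself.
public class LoggingOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner;

    public LoggingOrderRepository(IOrderRepository inner)
    {
        _inner = inner;
    }

    public void Save(int orderId)
    {
        Console.WriteLine("Saving order " + orderId);        // usage logging
        try
        {
            _inner.Save(orderId);                            // the real work
        }
        catch (Exception ex)
        {
            Console.WriteLine("Save failed: " + ex.Message); // error handling
            throw;
        }
    }
}
```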
But if you want to stick with your layout.
I think I would go with option 2, assuming that means restructuring the utilities project to be more cohesive.
I don't understand (please clarify and I will edit my post)
Bundle SQL with logic in a given class
As for:
Discard the Common project and put utility functions in with BLL/DAL
I would be against doing that. If this logic is truly going to be reused, there is no reason to push it back into your other projects; doing so will lead to duplicate code and increased maintenance.
Side Note:
Just as a lesson learned: the only way utilities projects work is if you are the only developer, or if they are well documented and well designed. Sometimes utilities are too programmer-specific, or are written in a way that only suits a particular coder's style.
I have seen, countless times, people rework their infrastructure, pulling out all kinds of utilities, only to see their utilities project never get used. Make sure the "utilities" you are creating are truly useful to other people.

What are the benefits of Spring ActionScript considering Dynamic Proxies and Reflection are limited

What are the benefits of Spring ActionScript, considering Dynamic Proxies are not possible in the current version of ActionScript and Reflection is quite limited?
So, for example, I could specify my object creation in an XML application context, but why would I do that when I can simply specify it in code and hence take advantage of static type checking, etc.?
It is by no means my intent to belittle the work done on Spring Actionscript but more to find an application for it in my projects.
Besides XML configuration, Spring ActionScript also supports MXML configuration. The type of config (XML, MXML) depends on the use cases your application needs to support. For the reasons you mention, it makes perfect sense to configure most of the context in MXML, but I would encourage you to externalize the config of service endpoints in every case.
In a past project we opted for XML config since the configuration was generated at runtime when a user logged on to the application. Depending on the user credentials, different endpoints and various different settings were used. We could not have done this elegantly with static MXML configs.
Both config types have their strengths and weaknesses, and it's up to you to decide which type you want to use. I think we could even support a mixture of MXML and XML quite easily, actually, if that would make sense. As soon as we have Dynamic Proxies and class loading, XML config will make a lot more sense.
I would agree with Sean in the general sense that trying to force Flex inside of the Java box is generally a bad idea. As many similarities as there are, Flex is not Java.
That being said, there are plenty of reasons why you might want to have some of your configuration in an external XML file, not the least of which is the use case of configuring your service destinations and endpoints, where you may need to change the endpoint URI without recompiling your application.
There are several projects available that are simply misguided ports of philosophies from other platforms. Whenever starting in on a new platform, I think the best thing to do is figure out how people are effectively developing and go from there.
I say all of that because I think all of the Java-esque frameworks for Flex/Flash leave you worse off than you started. You do need dependency injection, but there are good AS3/MXML-friendly frameworks for that (Mate, Swiz). There is absolutely no point in using XML when you can use MXML, which is strongly typed.
