I have a DDD application and I am trying to understand where SignalR fits in my layers:
1. Presentation (Angular)
2. Distributed Services (Web API)
3. Application
4. Domain
5. Data
Basically, my SignalR hub notifies clients (the Angular web app) when there is new data. To do this, I run a background service on a background thread that checks the database on an interval and notifies clients when new data arrives.
I am inclined to think in this way:
The SignalR hub belongs to the Presentation layer. Given that my presentation project is purely client-side (Angular), I would add a new project under Presentation just for the hub.
The background service that checks the database on an interval seems appropriate for the Application layer. I would inject an INotify interface with a Notify method, which I would implement with SignalR.
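A minimal sketch of that idea (assuming ASP.NET Core SignalR; the INotify name comes from my description above, while NewDataHub, SignalRNotifier and NewDataMonitor are made-up names purely for illustration):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

// Application layer: the abstraction the background service depends on.
public interface INotify
{
    Task NotifyAsync(string message);
}

// Hub project under Presentation: the SignalR hub and the INotify implementation.
public class NewDataHub : Hub { }

public class SignalRNotifier : INotify
{
    private readonly IHubContext<NewDataHub> _hubContext;

    public SignalRNotifier(IHubContext<NewDataHub> hubContext) => _hubContext = hubContext;

    public Task NotifyAsync(string message) =>
        _hubContext.Clients.All.SendAsync("newData", message);
}

// Application layer: the polling background service only ever sees INotify.
public class NewDataMonitor : BackgroundService
{
    private readonly INotify _notify;

    public NewDataMonitor(INotify notify) => _notify = notify;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // check the database for new data here (omitted)
            await _notify.NotifyAsync("new data available");
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}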
Is this in line with DDD principles?
DDD is all about ensuring that changes to your data happen only in well-defined ways, and that the code executing those changes is expressed in terms of a Ubiquitous Language that is well understood throughout the whole business (not just the dev team).
DDD is silent on the mechanism used to interface with your users and other systems, other than recommending a layered architecture - which it sounds like you're already doing.
So - I wouldn't worry too much about DDD here - but it is worth considering your overall architectural approach - and in terms of layered architectural patterns, one that matches well to your approach is called Ports & Adaptors or Onion architecture. (see 1 and 2)
In this architecture, the outside of your system is considered as a set of adaptors that adapt between specific technology and your application layer. In your case your WebAPI layer is an example of an adaptor.
I would recommend creating a new SignalR adaptor - you can consider it at the same 'level' as the WebAPI adaptor (although in ports and adaptor parlance it's an 'output' adaptor, so you might diagram it on the bottom right of the onion).
In terms of the location of your background process - personally I would not consider that a part of the application layer, as it does not guide any use cases or process flows in your application. So, you could put it in your SignalR adaptor, or create a new dedicated component for it.
That said, you may find another concept from DDD useful - DomainEvents - they could remove the need for the background thread altogether. In your SignalR adaptor, include event handlers that register to handle DomainEvents, and in those handlers, propagate the information about the event via SignalR to your client side presentation layer - no need to poll the database at all! (Warning - depending on your domain event implementation, you may need to consider the risk of advertising events via SignalR before the aggregate is successfully persisted... but that's a whole 'nother topic.)
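As a rough sketch of what such an adaptor-side handler could look like (the IHandles&lt;TEvent&gt; contract, the NewOrderReceived event and the OrdersHub are assumptions for illustration, not any particular framework's API; the SignalR usage assumes ASP.NET Core):

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Assumed contracts from your domain events infrastructure.
public interface IHandles<in TEvent> { Task Handle(TEvent domainEvent); }
public class NewOrderReceived { public int OrderId { get; set; } }

public class OrdersHub : Hub { }

// Lives in the SignalR adaptor: it translates a domain event into a push to the clients.
public class NewOrderReceivedSignalRHandler : IHandles<NewOrderReceived>
{
    private readonly IHubContext<OrdersHub> _hub;

    public NewOrderReceivedSignalRHandler(IHubContext<OrdersHub> hub) => _hub = hub;

    public Task Handle(NewOrderReceived e) =>
        _hub.Clients.All.SendAsync("newOrderReceived", e.OrderId);
}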
So my question is very much related to this one: Entity persistence inside Domain Events using a repository and Entity Framework?
EDIT: A much better discussion on the topic is also here: Where to raise persistence-dependent domain events - service, repository, or UI?
However my question is rather more simple and technical, assuming that I'm taking the right approach.
Let's suppose I have the following projects:
MyDomainLayer -> very simple classes, Persistence Ignorance, a.k.a. POCOs
MyInfrastructureLayer -> includes code for repositories, Entity Framework
MyApplicationLayer -> includes ASP.Net MVC controllers
MyServicesLayer -> WCF-related code
MyWebApplication -> ASP.Net MVC (Views, Scripts, etc)
When an event is raised (for example, a group membership has been granted),
two things should be done (in two different layers):
Persist the data (insert a new group membership record in the DB)
Create a notification for the involved users (UI related)
I'll take a simple example from the last reference I mentioned in the introduction:
The domain layer has the following code:
public void ChangeStatus(OrderStatus status)
{
    // change status
    this.Status = status;
    DomainEvent.Raise(new OrderStatusChanged { OrderId = Id, Status = status });
}
Let's assume the event handler is in MyApplicationLayer (to be able to talk to the Services Layer).
It has the following code:
DomainEvent.Register<OrderStatusChanged>(x => orderStatusChanged = x);
How does the wire-up happen? I guess it's done with StructureMap, but what does this wire-up code look like exactly?
First, your layering isn't exactly right. Corrections:
Application Layer - ASP.NET MVC controllers are normally thought of as forming an adapter between your application layer and HTTP/HTML. Therefore, the controllers aren't themselves part of the application layer. What belongs in the application layer are application services.
MyServicesLayer - WCF-related code. WCF implemented services are adapters in the hexagonal architecture referenced by Dennis Traub.
MyWebApplication - ASP.NET MVC (Views, Scripts, etc). Again, this forms an adapter in a hexagonal architecture. MVC controllers belong here as well - effectively they are an implementation detail of this adapter. This layer is very similar to a service layer implemented with WCF.
Next, you describe two things that should happen in response to an event. Persistence is usually achieved by committing a unit of work within a transaction, not as a handler in response to an event. Also, notifications should be made after persistence is complete, or in other words after the transaction is committed. This is best done in an eventually consistent manner, outside of the unit of work that generated the domain event in the first place.
For specifics on how to implement a domain event pub/sub system take a look here.
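To give a flavour of the wiring, here is a sketch along the lines of Udi Dahan's widely circulated static DomainEvents implementation, adapted to the DomainEvent.Raise/Register calls shown in the question and assuming StructureMap as the container (the IDomainEvent and Handles&lt;T&gt; interfaces belong to that pattern, not to any framework):

using System;
using System.Collections.Generic;
using StructureMap;

public interface IDomainEvent { }

// Handlers implement this and are picked up from the container.
public interface Handles<T> where T : IDomainEvent
{
    void Handle(T args);
}

public static class DomainEvent
{
    [ThreadStatic] private static List<Delegate> _actions;

    // Set once at application startup, e.g. DomainEvent.Container = container;
    public static IContainer Container { get; set; }

    // Used mainly by unit tests, as in the Register<OrderStatusChanged> line above.
    public static void Register<T>(Action<T> callback) where T : IDomainEvent
    {
        if (_actions == null) _actions = new List<Delegate>();
        _actions.Add(callback);
    }

    public static void ClearCallbacks()
    {
        _actions = null;
    }

    public static void Raise<T>(T args) where T : IDomainEvent
    {
        if (Container != null)
            foreach (var handler in Container.GetAllInstances<Handles<T>>())
                handler.Handle(args);

        if (_actions != null)
            foreach (var action in _actions)
                if (action is Action<T>)
                    ((Action<T>)action)(args);
    }
}

// Wire-up at application startup (StructureMap scanning, so new handlers are found automatically):
// var container = new Container(cfg => cfg.Scan(scan =>
// {
//     scan.TheCallingAssembly();
//     scan.ConnectImplementationsToTypesClosing(typeof(Handles<>));
// }));
// DomainEvent.Container = container;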
My first recommendation, get rid of the notion of Layers and make yourself familiar with the concept of a Hexagonal Architecture a.k.a. Ports and Adapters.
With this approach it is much easier to understand how the domain model can stay independent of any of the surrounding concerns. Basically that is object-orientation on an architectural level. Layers are procedural.
For your specific problem, you might create a project containing the event handlers that project events into the database. These handlers can have direct access to the database or go through an ORM. You probably won't need any repositories there since the events should contain all information that's needed.
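For example, such a projection handler might look roughly like this (reusing the IDomainEvent/Handles&lt;T&gt; contracts from the sketch above and assuming Entity Framework 6; every type name here is an illustrative stand-in):

using System.Data.Entity;
using System.Linq;

// Stand-ins for types that would already exist elsewhere in the solution:
public enum OrderStatus { Pending, Confirmed, Shipped }
public class Order { public int Id { get; set; } public OrderStatus Status { get; set; } }
public class OrderStatusChanged : IDomainEvent { public int OrderId { get; set; } public OrderStatus Status { get; set; } }
public class MyDbContext : DbContext { public DbSet<Order> Orders { get; set; } }

// Lives in the dedicated event-handlers project; the event carries everything
// the projection needs, so no repository is required here.
public class OrderStatusChangedPersistenceHandler : Handles<OrderStatusChanged>
{
    public void Handle(OrderStatusChanged args)
    {
        using (var db = new MyDbContext())
        {
            var order = db.Orders.Single(o => o.Id == args.OrderId);
            order.Status = args.Status;
            db.SaveChanges();
        }
    }
}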
I have an ASP.NET MVC 3 application that I am currently working on. I am implementing a service layer, which contains the business logic, and which is utilized by the controllers. The services themselves utilize repositories for data access, and the repositories use entity framework to talk to the database.
So top to bottom is: Controller > Service Layer > Repository (each service layer depends on a single injectable repository) > Entity Framework > Single Database.
I am finding myself making items such as UserService, EventService, PaymentService, etc.
In the service layer, I'll have functions such as:
ChargePaymentCard(int cardId, decimal amount) (part of PaymentService)
ActivateEvent(int eventId) (part of EventService)
SendValidationEmail(int userId) (part of UserService)
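To make that stack concrete, here is a minimal sketch of one such service and its injected repository (the interface, class, and method names are just illustrative):

// Illustrative shape of one service in that stack:
public class PaymentCard { public int Id { get; set; } }

public interface IPaymentRepository
{
    PaymentCard GetCard(int cardId);
    void SaveCharge(int cardId, decimal amount);
}

public class PaymentService
{
    private readonly IPaymentRepository _payments;

    // The repository is injected, so the service never talks to Entity Framework directly.
    public PaymentService(IPaymentRepository payments)
    {
        _payments = payments;
    }

    public void ChargePaymentCard(int cardId, decimal amount)
    {
        var card = _payments.GetCard(cardId);
        // ...business rules: validate the card, enforce limits, etc. (omitted)...
        _payments.SaveCharge(card.Id, amount);
    }
}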
Also, as an example of a second place I am using this, I have another simple console application that runs as a scheduled task, which utilizes one of these services. There is also an upcoming second web application that will need to use multiple of these services.
Further, I would like to keep us open to splitting things up (such as our single database) and to moving to a service-oriented architecture down the road, breaking some of these out into web services (conceivably even to be used by non-.NET apps some day). I've been keeping my eyes open for steps that might make the leap to SOA less painful.
I have started down the path of creating a separate assembly (DLL) for each service, but am wondering if I have started down the wrong path. I'm trying to be flexible and keep things loosely coupled, but is this path helping me any (towards SOA or in general), or just adding complexity? Should I instead be creating a single assembly/DLL containing my entire service layer, and use that single assembly wherever any services need to be used?
I'm not sure the implications of the path(s) I'm starting down, so any help on this would be appreciated!
IMO, the answer depends on a lot of factors in your application.
Assuming that you are building a non-trivial application (i.e. is not a college/hobby project to learn SOA):
User Service / Event Service / Payment Service
-- Create its own DLL & expose it as a WCF service if more than one application uses the service and if sharing the DLL across different applications carries too much risk
-- These services should not have inter-dependencies between each other & should focus on their individual area
-- Note: these services might share some common services like logging, authentication, data access etc.
Create a Composition Service
-- This service will do the composition of calls across all the other services
-- For example: if an Order is placed & the business flow is Order Placed > Confirm User Exists (User Service) > Raise an OrderPlaced event (Event Service) > Confirm Payment (Payment Service)
-- All such composition of service calls can be handled in this layer
-- Again, depending on the environment, you might choose to expose this service as its own DLL and/or expose it as a WCF service
-- Note: this is the only service which will share the references to other services & will be the single point of composition
Now, with this layout, you can call a service directly if you want to interact with that service alone, and you call the composition service when you need a business workflow in which different services must be composed to complete the transaction.
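As a rough sketch of that composition (the interfaces and method names below are assumptions, purely to show the shape):

// Individual services stay focused on their own areas (assumed contracts):
public interface IUserService { bool UserExists(int userId); }
public interface IEventService { void RaiseOrderPlaced(int orderId); }
public interface IPaymentService { bool ConfirmPayment(int orderId); }

// The composition service is the only component that references the others.
public class OrderCompositionService
{
    private readonly IUserService _users;
    private readonly IEventService _events;
    private readonly IPaymentService _payments;

    public OrderCompositionService(IUserService users, IEventService events, IPaymentService payments)
    {
        _users = users;
        _events = events;
        _payments = payments;
    }

    public bool PlaceOrder(int userId, int orderId)
    {
        if (!_users.UserExists(userId))
            return false;                        // Confirm user exists (User Service)

        _events.RaiseOrderPlaced(orderId);       // Raise an OrderPlaced event (Event Service)

        return _payments.ConfirmPayment(orderId); // Confirm payment (Payment Service)
    }
}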
As a starting point, I would recommend that you go through any of the books on SOA architecture - it will help clear a lot of concepts.
I tried to be as short as possible to keep this answer meaningful, there are tons of ways of doing the same thing, this is just one of the possible ways.
HTH.
Having one DLL per service sounds like a bad idea. According to Microsoft, you'd want to have one large assembly over multiple smaller ones due to performance implications (from here via this post).
I would split your base or core services into a separate project and keep most (if not all) services in it. Depending on your needs you may have services that only make sense in the context of a web project or a console app and not anywhere else. Those services should not be part of the "core" service layer and should reside in their appropriate projects.
It is better to separate the services from the consumers. In our projects we have two levels of separation: we group all the service interfaces into one Visual Studio project, and all the service implementations into another project.
The consumer of the services needs to reference two DLLs, but it makes the solution more maintainable and scalable. We can have multiple implementations of the services.
For example, the interfaces project can define a contract for WebSearch, and there can be multiple implementations of WebSearch through different search providers such as Google Search, Bing Search, Yahoo Search, etc.
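In code, that separation might look roughly like this (the interface and provider class names are illustrative):

using System.Collections.Generic;
using System.Linq;

// Interfaces project: the contract the consumers reference.
public interface IWebSearchService
{
    IEnumerable<string> Search(string query);
}

// Implementations project: one class per provider, all swappable behind the same contract.
public class GoogleSearchService : IWebSearchService
{
    public IEnumerable<string> Search(string query)
    {
        // call the Google search API here (omitted)
        return Enumerable.Empty<string>();
    }
}

public class BingSearchService : IWebSearchService
{
    public IEnumerable<string> Search(string query)
    {
        // call the Bing search API here (omitted)
        return Enumerable.Empty<string>();
    }
}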
I've read lots of articles about Service Oriented architecture.
Is there any real-world sample application implemented in ASP.NET?
Thanks
The short answers is: not that I know of.
The other thing to keep in mind (which you're probably already aware of) is that the level of abstraction is very important.
On one level, the "Service" in SOA is a Business Service - not a technical service like a web service; in fact, at this level the idea of implementation is completely irrelevant. This is more at the Enterprise Architecture and Business Architecture level.
Lower down, there's what you might call Service Oriented Design, where software systems are built in a way that is service-based - each offers something that is easily consumed by other systems (or consumes a service in much the same way). Even at this point we're not talking about implementation-specific things like technology - it's more of a mindset - how the system is arranged (the architecture).
The next level down is where software systems offer services as physical end-points that are defined by an address, binding and contract (The ABC of SOA).
At this level you will be able to find implementations; NServiceBus comes to mind (not that I have used it) - but you don't need a service bus to do "Service" orientated architecture.
Finally, I'm not sure exactly how you view ASP.NET in the context of your question. If you're .NET based then WCF is the place to start looking; one of the binding types is a web service, which, being web-based, kind of comes under the umbrella of ASP.NET. Alternatively, if you're building a website or web application, the services that the application offers or consumes would be located in a data access or services layer, loosely coupled to the Business Logic (BL) layer - so they aren't directly related to the fact that you're doing a web application at all, as this architecture would work for different kinds of application (not just web).
In an effort to understand MVC 2 and attempt to get my company to adopt it as a viable platform for future development, I have been doing a lot of reading lately. Having worked with ASP.NET pretty exclusively for the past few years, I had some catching up to do.
Currently, I understand the repository pattern, models, controllers, data annotations, etc. But there is one thing that is keeping me from completely understanding enough to start work on a reference application.
That one thing is the Service Layer pattern. I have read many blog posts and questions here on Stack Overflow, but I still don't completely understand the purpose of this pattern. I watched the entire video series at MVCCentral on the Golf Tracker application, and also looked at the demo code he posted, and it looks to me like the service layer is just another wrapper around the repository pattern that doesn't perform any work at all.
I also read this post: http://www.asp.net/Learn/mvc/tutorial-38-cs.aspx and it seemed to somewhat answer my question, however, if you are using data annotations to perform your validation, this seems unnecessary.
I have looked for demonstrations, posts, etc. but I can't seem to find anything that simply explains the pattern and gives me compelling evidence to use it.
Can someone please provide me with a 2nd grade (ok, maybe 5th grade) reason to use this pattern, what I would lose if I don't, and what I gain if I do?
In an MVC pattern you have responsibilities separated between the three players: Model, View, and Controller.
The Model is responsible for doing the business stuff, the View presents the results of the business (providing also input to the business from the user) while the Controller acts like the glue between the Model and the View, separating the inner workings of each from the other.
The Model is usually backed up by a database so you have some DAOs accessing that. Your business does some...well... business and stores or retrieves data in/from the database.
But who coordinates the DAOs? The Controller? No! The Model should.
Enter the Service layer. The Service layer provides high-level services to the controller and manages the other (lower-level) players (DAOs, other services, etc.) behind the scenes. It contains the business logic of your app.
What happens if you don't use it?
You will have to put the business logic somewhere and the victim is usually the controller.
If the controller is web-centric, it will have to receive its input and provide responses as HTTP requests and responses. But what if I want to call my app (and get access to the business it provides) from a Windows application which communicates over RPC or some other mechanism? What then?
Well, you would have to rewrite the controller and make the logic client-agnostic. But with the Service layer you already have that. You don't need to rewrite things.
The service layer communicates using DTOs which are not tied to a specific controller implementation. If the controller (no matter what type of controller) provides the appropriate data (no matter the source), your service layer will do its thing, providing a service to the caller and hiding the caller from all responsibilities of the business logic involved.
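A small sketch of that idea (the DTO, service, and controller names are invented for illustration):

using System.Web.Mvc;

// DTO: plain data, not tied to MVC, WCF, or any other controller technology.
public class OrderDto
{
    public int Id { get; set; }
    public string Status { get; set; }
}

// Service layer: the business logic lives behind this contract.
public interface IOrderService
{
    OrderDto GetOrder(int orderId);
    void ChangeStatus(int orderId, string newStatus);
}

// An MVC controller is just one possible caller; a WCF endpoint or an RPC host
// could consume the same service without any rewriting.
public class OrdersController : Controller
{
    private readonly IOrderService _orders;

    public OrdersController(IOrderService orders)
    {
        _orders = orders;
    }

    public ActionResult Details(int id)
    {
        // The controller only adapts HTTP to the service call.
        return View(_orders.GetOrder(id));
    }
}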
I have to say I agree with dpb above; the wrapper, i.e. the Service Layer, is reusable and mockable. I am currently in the process of including this layer inside my app... here are some of the issues/requirements I am pondering over (very quickly :p ) that could be of help to yourself...
1. Multiple portals (e.g. bloggers portal, client portal, internal portal) which will need to be accessed by many different users. They all must be separate ASP.NET MVC applications (an important requirement).
2. Within the apps themselves, some calls to the database will be similar in their methods and in the way the data is handled from the Repository layer. Without doubt some controllers from each module/portal will make exactly the same call, or an overloaded version of it, hence a possible need for a service layer (code to interfaces), which I would then compile into a separate class project.
3. If I create a separate class project for my service layer, I may need to do the same for the Data Layer, or combine it with the Service Layer and keep the model away from the Web project itself. At least this way, as my project grows, I can throw out the data access layer (i.e. LinqToSql -> NHibernate), or a team member can, without touching code in any other project. The downside could be they could blow everything up lol...
I was wondering if any of you had successfully implemented DDD in a Client/Server app and would like to share some experiences.
We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations amongst some other service methods. I understand that in DDD these services should be repositories and services should be used to handle use cases that do not fit inside a repository. Right now, we mimic these services on the client behind an interface and inject implementations (Webservices, RMI, etc) via an IoC container.
So some questions arise:
should the server expose repositories to the client, or do we need to have some sort of a facade (that is able to handle security, for instance)?
should the client implement repositories (and DDD in general?) knowing that in the client, most of the logic is view related and real business logic lives on the server. All communication with the server happens asynchronously and we have a single threaded programming model on the client.
how about mapping client to server objects and vice versa? We tried DTOs but reverted back to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)
In general I think a new generation of applications is coming with the growth of Flex, Silverlight, JavaFX and I'm curious how DDD fits into this.
I would not expose repositories directly to the client. The first big problem as you mention is security: you can't trust the client, so you cannot expose your data access API to potentially hostile clients.
Wrap your repositories with services on the server and create a thin delegate layer in the client that handles the remote communication.
Exposing your Entities is not necessarily a bad practice; it just becomes problematic when you start to factor in things like lazy loading, sending data over the wire that the client doesn't need, etc. If you write a DTO class which wraps one or more entities and delegates get/set calls, you can actually build up a DTO layer pretty quickly, especially using the code generation available in most IDEs.
The key to all of this is that a set of patterns should really only apply to a part of your application, not to the whole thing. The fact that you have rich logic in your domain model and use repositories for data access as part of DDD should not influence the client in any way. Conceptually the RIAs that I build have three layers:
Client uses something like MVC, MVP or MVVM to present the UI. The Model layer eventually calls into...
What I might call the "Integration Layer." This is a contract of services and data objects that exists on both the client and server to allow the two to coordinate. Usually the UI design drives this layer, so that (A) only the data that the client needs is passed to it and (B) data access can be coarse-grained, i.e. "make one method call for all the state needed for this set of UI."
Server using whatever it wants to handle business logic and data access. This might be DDD or something a little more old school, like a data layer built using stored procs in the DB and a lot of "ResultSet" or "DataTable" objects.
The point really is that the client and server are both very different animals and they need to vary independently. In order to do so, you need a layer in between that is a fair compromise between the needs of the UI and the reality of how things might need to be on the server.
The one big advantage that Silverlight/WPF and JavaFX have over Flex + anything is that you can reuse a lot of logic in the first two because you have the same VM on both sides of the app. Flex is the best UI technology hands down, but it lacks a server component where code could be shared and re-used more effectively.