ASP.NET MVC and Entity Framework Architecture

I'm working on a fairly large project at the moment and am currently in the planning stages. I've done a lot of reading into the various patterns suggested for development. Something that has split the team is whether, when using Entity Framework, the entity classes should be passed up through the application layers so that a view accepts an Entity Framework class, or whether these classes should be mapped to BLL classes, and if so, at which point (controller or library) that mapping should be done.
I'm interested in hearing some positives and negatives for each solution.

This is one of those great "it depends" questions...
For me it's a matter of pragmatism. I use the raw entity classes wherever I can for expediency. I start using DTOs when either the object graph in question becomes too cumbersome or the object in question holds sensitive data I don't want sent over the wire.

This is again one of those questions that doesn't really have a right or wrong answer; it's personal taste, really. Personally, I would opt for using DTOs or interfaces when passing data to the views. I don't tend to pass entity objects around to different layers of my application; they are strictly confined to the DAL. If I do need to pass one up a layer, I would almost always use an interface, never the concrete type.
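
To make the trade-off concrete, here is a minimal sketch (the Customer entity, CustomerViewModel and controller are all hypothetical) of mapping an EF entity to a view model in the controller so the view never sees the EF type:

using System.Web.Mvc;

// Hypothetical EF entity - stays inside the DAL/BLL.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string PasswordHash { get; set; }  // sensitive - should never reach a view
}

// Plain view model / DTO handed to the view instead of the entity.
public class CustomerViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerController : Controller
{
    public ActionResult Details(int id)
    {
        Customer entity = LoadCustomer(id);  // however your DAL/BLL returns it

        // Copy only the fields the view needs; the mapping cost buys isolation.
        var model = new CustomerViewModel { Id = entity.Id, Name = entity.Name };
        return View(model);
    }

    private Customer LoadCustomer(int id)
    {
        // Placeholder for a repository/context call such as context.Customers.Find(id).
        return new Customer { Id = id, Name = "Sample" };
    }
}

Passing the raw entity instead simply means returning entity directly to View(), at the cost of coupling the view to the EF type.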

What is the service layer in an ASP.NET project?

I'm having trouble understanding the concept of DDD. I have an ASP.NET project with this structure:
ASP.NET MVC4 project: xxx.UI.Web
Class Library projects: xxx.Application, xxx.Domain, xxx.Infra.EF
I'm trying to keep these relationships:
xxx.UI.Web only references xxx.Domain and xxx.Application
xxx.Domain references nothing.
xxx.Application references xxx.Domain and xxx.Infra.EF
xxx.Infra.EF references xxx.Domain
But now I'm running into many problems keeping to these concepts with Entity Framework. I have created the entity repositories in xxx.Infra.EF and created a generic repository (with interfaces and so on) in xxx.Application.
The problems begin when I need to pass a customized Entity Framework context to my repositories, because I use the repositories in xxx.UI.Web and I can't instantiate a new context there without breaking my project structure (the context comes from xxx.Infra.EF).
My idea is to create a number of helper methods that will handle this kind of operation for xxx.UI.Web, but I wouldn't like to create these methods in xxx.Application (it seems a little strange to create many methods that have little relation to my business logic).
So I was reading a little about Domain-Driven Design (DDD) and learned about the Service layer, and it seems to be the layer that was created to solve problems like this, or not?
My idea is to create a new class library project called xxx.Service and have it reference xxx.Domain and xxx.Infra.EF. Is that right? I know I could look for other solutions to this specific Entity context problem, but I expect I'll have more problems like this in the future with other things, so I'm trying to find a general solution. I should study this much more, but I think I've identified the right fix for my problem.
I suggest you study the concept of layered architecture further.
You are correct that the Domain should not reference other layers, such as the Presentation Layer and Infrastructure Layer.
The most common style of layered architecture is the "Onion Architecture"; see the link below.
http://jeffreypalermo.com/blog/the-onion-architecture-part-1/
The Service Layer
You are also correct that a new layer (the Service Layer) will solve your problem. The problem arises because the UI/Presentation layer is talking directly to the repositories (Infrastructure).
In our approach, the Presentation layer does not communicate directly with the Infrastructure or the Domain. We have a Service layer that sits between the Presentation and the Domain.
This architecture borrows heavily from the Onion Architecture. We don't have domain services, and Application Core is our cross-cutting layer.
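
As a rough sketch of how that looks in code (IArticleService, ArticleService, ArticleDto and IArticleRepository are invented names, not part of the original projects), the Presentation layer depends only on a service interface, and the EF context stays behind the repository it is given:

using System.Collections.Generic;
using System.Linq;

// Hypothetical domain entity and DTO for this sketch.
public class Article { public int Id { get; set; } public string Title { get; set; } }
public class ArticleDto { public int Id { get; set; } public string Title { get; set; } }

// Repository contract implemented in xxx.Infra.EF (that project owns the EF context).
public interface IArticleRepository
{
    IEnumerable<Article> GetPublished();
}

// The only thing the Presentation layer sees: a service interface.
public interface IArticleService
{
    IList<ArticleDto> GetPublishedArticles();
}

public class ArticleService : IArticleService
{
    private readonly IArticleRepository _repository;

    public ArticleService(IArticleRepository repository)
    {
        _repository = repository;
    }

    public IList<ArticleDto> GetPublishedArticles()
    {
        // The service coordinates the repository and maps entities to DTOs,
        // so the UI never has to instantiate or touch the EF context.
        return _repository.GetPublished()
                          .Select(a => new ArticleDto { Id = a.Id, Title = a.Title })
                          .ToList();
    }
}

With an IoC container wiring in the concrete ArticleService and repository, xxx.UI.Web only needs a reference to the project that defines IArticleService.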
People often misunderstand the point of a multi-tiered application. The goal is not necessarily to remove dependencies, but rather to encapsulate and modularize code. Your MVC front end shouldn't need to be concerned with your entities and how they're queried, added, updated, etc., but that doesn't mean it doesn't need access to them. Long and short: don't focus on project references; focus on factoring code that is not in a given project's "domain" out into a more appropriate one.
To that end, it's not really possible to say, based on the information you provided, exactly what you should do. The question is opinionated to begin with and will very likely end up being closed as a result. Ultimately, you have to decide what's best for your project. Do what makes sense rather than blindly following some pattern. After all, patterns are supposed to be a codification of common-sense approaches to common problems anyway.

Guidelines on structuring code for .NET projects

I generally use the IoC pattern in my projects, which are mostly ASP.NET based. Are there any guidelines on how to structure the projects in a general 3-layered solution (UI + BL + Data Access)? I want to know more about how the folders should be created and where constants should be kept within each layer (I keep all the strings, such as query string parameters, stored procedure parameters, etc., in a singleton class named Constants). I also want to know how the classes in the Business Layer that interact with the Data Access Layer should be created, and similar code-structure questions.
Is there any guidance or a book on this?
Microsoft has a plethora of information on this. I've used Microsoft .NET: Architecting Applications for the Enterprise as my bible for software architecture.
http://www.amazon.com/Microsoft%C2%AE-NET-Architecting-Applications-Pro-Developer/dp/073562609X
Check out this MSDN guide as well
http://msdn.microsoft.com/en-us/library/ff647095.aspx
Also, take a look at some application frameworks like Sharp Architecture for examples
http://sharparchitecture.net/
A lot of NHibernate tutorials demonstrate software design principles that can be applied to any solution
http://nhforge.org/blogs/nhibernate/archive/2010/04/25/first-three-nhibernate-quickstart-tutorials-available.aspx
#robbymurphy has a great answer. I would only add that I keep most constants and interfaces in a separate project/assembly altogether. I call this my "core" assembly, and in it I define the interfaces that allow me to pass data from the top of the stack to the bottom without tightly coupling them.
It is not so much where they are used but for what purpose. I once attended a seminar where the instructor pounded "high cohesion, low coupling" into our heads, over and over.
Keep the things that, in the real world, belong together, together, but reduce dependencies between objects whenever possible.
This is a cohesion question as well as a coupling issue: if the constants are truly internal to a class, make them private static members (e.g. an internal state enum). If they are truly internal to a project, create a class for them and make it internal (e.g. a database-specific constant in your data layer). Otherwise, put them in a public class in their own project, as sketched below.
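
A small sketch of those three placements (all type and constant names below are invented for illustration):

// 1. Truly internal to a class: a private member of that class.
public class OrderProcessor
{
    private enum State { Pending, Shipped, Cancelled }  // internal state enum
    private const int MaxRetries = 3;
}

// 2. Internal to a project: an internal class inside that layer.
internal static class DataAccessConstants
{
    internal const string GetOrdersProcName = "usp_GetOrders";  // data-layer-only constant
}

// 3. Shared across layers: a public class in its own "core" project/assembly.
public static class CoreConstants
{
    public const string DefaultDateFormat = "yyyy-MM-dd";
}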

Single overloaded method at BLL layer vs BLL methods having one-to-one correspondence with DAL methods

Assume we create a 3-tier module that enables us to display/create/edit articles. Articles are:
• organized into categories
• before an article can be published, an admin has to approve it by setting the Approve field of the Articles DB table to true
• by setting the MembersOnly field (Articles table), the admin can also specify whether a particular article can be viewed by anybody or only by registered users
• the Articles table also has an ExpiredDate field, which tells when the article will expire and thus no longer be published
a) At the DAL layer we provide the methods GetAllArticles, GetArticlesByCategory, GetPublishedArticles and GetPublishedArticlesByCategory for retrieving articles from the DB.
At the BLL layer we use GetArticles() overloads to call all of the above DAL methods.
What are some of the benefits of using a single overloaded method at the BLL layer instead of BLL methods having a one-to-one correspondence with DAL methods? The only advantage I can think of is that this way the same ObjectDataSource control can call two or more of the GetArticles() overloads, depending on the values of its parameters, for example:
public static List<Article> GetArticles(bool publishedOnly)
{
    if (!publishedOnly)
        return GetArticles();
    ...
}
If you don't also design the UI layer and thus can't be sure which methods the UI programmer will prefer the most, would it be best practice for the BLL to provide four overloads of GetArticles() plus GetAllArticles, GetArticlesByCategory, GetPublishedArticles and GetPublishedArticlesByCategory?
2) When designing DAL methods to retrieve data from the DB, how can you know/predict in advance (without first designing the UI) exactly which methods (for accessing the DB) should be created at the DAL layer?
Namely, in the previous example I had several methods for retrieving articles based on a number of parameters (the category they belong to, whether we only want published articles, etc.). Assuming I'm selling this module to third-party UI developers, there is no way to know which data access methods they would prefer the most:
a) so should I create as many data access methods as I can think of (one for getting all articles that have already expired, one for getting all articles that have already expired but were never published, one for getting all articles that are not published, one for getting all articles that can be viewed by registered users only…)?
b) Even if all three layers are written by myself, should I still create as many data access methods as I can think of?
thanx
EDIT:
A common way to achieve this is to use interfaces to define the behavior of the API.
a) I'm not sure I understand this. Which class should implement this interface? Perhaps the BLL class? In other words, if the name of my BLL class is Article, would a third party derive a class named ChildArticle from Article, where ChildArticle would also implement this interface? Or did you mean something else?
b) Anyway, as far as I understand it, providing an interface (which defines additional BLL methods to retrieve articles from the DB) would also require the DAL class to already have the appropriate methods defined, which would be called by the methods declared in the interface?
To your point, I believe it is a good idea to prefer fewer coarse-grained methods in the BLL to cover all the functionality required by an entire business operation
I'm not familiar with this term, but you're probably suggesting that we should prefer the overloaded GetArticles() over GetAllArticles, GetArticlesByCategory, GetPublishedArticles and GetPublishedArticlesByCategory?
A) The design of an API is strictly related to what it is meant to achieve and by whom it will be used.
In practice, this means that you should know the target audience of your API, and give them only what they need to get the job done.
Unless I personally interview the people who would buy my product, I can only guess in general terms which methods they would find useful, but within that space there are still any number of possible methods I could define. Thus, how should I know whether they would also find use for, say, a GetArticles() overload that retrieves articles that have already expired?!
On the other side it is perfectly fine to have many smaller data-centric methods in the DAL to work with specific pieces of data.
If not the BLL, should the DAL have as many data access methods as I can come up with (for a particular target audience, of course)?
SECOND EDIT:
A few extensibility points can be built into the API to obtain a certain degree of flexibility. A common way to achieve this is to use interfaces to define the behavior of the API. This will allow consumers to replace or extend the pieces of the built-in functionality by providing custom implementations.
Assume I create a BLL and then provide some additional interfaces that consumers could implement to extend the BLL's built-in functionality. But for consumers to be able to implement these interfaces, they will need access to the BLL's source code, right? What if I don't want consumers to see the BLL's source code?
Interfaces should exist between layers. More specifically, classes should interact with classes from other layers exclusively through interfaces
a) So even the DAL's built-in functionality should be exposed through interfaces? But why? If we used an abstract class instead of interfaces, that class could already implement some utility functions common to all the provider classes that inherit from it. On the other hand, if the DAL uses interfaces instead, then utility functions common to all providers would have to be implemented once for each provider, which could mean a lot of redundant coding.
b) Anyway, I don't quite see the benefit (except when we provide interfaces with which consumers can extend the basic functionality) of having classes from different layers interact through interfaces.
For added clarity, instead of overloading methods to work with different parameters, I believe it is better to have one method that accepts a single parameter. This parameter would be an object containing all the data for the method to work with. Some of that data could be required, some could be optional and would influence the effect of the operation.
If I know the UI will make extensive use of ObjectDataSource controls, should I still prefer the BLL to define a single method (with an object carrying all the data for the method to work with as its parameter) instead of method overloads?
cheers mate
I took the liberty of summarizing your post into two main questions. I hope I managed to capture the essence of what you are asking.
Q) What is the relationship between the interfaces exposed by the DAL and the ones exposed by the BLL?
A) The BLL is an outward-facing API, and as such it should implement functionality that is useful to the external consumers of the application and expose it in a way that makes sense to them.
The DAL, on the contrary, is an inward-facing API that exposes functionality to retrieve and persist data in a way that hides the details of the storage mechanism being used.
In short, the DAL focuses on how data is represented and managed internally in the application, while the BLL focuses on exposing data in a way that is meaningful to consumers.
Q) How many methods should a public API have, and which ones?
A) The design of an API is strictly related to what it is meant to achieve and by whom it will be used. In practice, this means that you should know the target audience of your API, and give them only what they need to get the job done.
Since it is impossible to predict all the possible ways an API will be used, it is important to decide which main use cases to support, and work to make them really straightforward in the API. A good principle to keep in mind is what Alan Kay once said:
Simple things should be simple,
complex things should be possible.
A few extensibility points can be built into the API to obtain a certain degree of flexibility. A common way to achieve this is to use interfaces to define the behavior of the API. This will allow consumers to replace or extend the pieces of the built-in functionality by providing custom implementations.
To your point, I believe it is a good idea to prefer fewer coarse-grained methods in the BLL to cover all the functionality required by an entire business operation. On the other side it is perfectly fine to have many smaller data-centric methods in the DAL to work with specific pieces of data.
UPDATE:
About interfaces
Interfaces should exist between layers. More specifically, classes should interact with classes from other layers exclusively through interfaces. For example, the DAL should expose interfaces for the classes used to access data, like IOrderHeaderTable or IOrderRepository depending on the design pattern being used.
The BLL should expose classes used to execute business operations, like IOrderManagementWorkflow, or ICustomerService.
Note: common functionality inside a layer can still be placed in base classes, since in modern Object-Oriented languages like C#, VB.NET and Java a class can both inherit from a base class and implement one or more interfaces.
Also, external parties who wish to customize the built-in functionality by implementing any of the provided public interfaces can do so without needing access to the source code. Interfaces should, however, be self-describing and well documented, in order to make it easy for extenders to understand their semantics.
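
As a condensed sketch of that arrangement (IOrderRepository and ICustomerService are the names used above; everything else is invented for illustration):

// Minimal types for the sketch.
public class Order { public int Id { get; set; } }
public class PlaceOrderRequest { public int CustomerId { get; set; } public int ProductId { get; set; } }

// DAL contract: how data is retrieved and persisted, with storage details hidden.
public interface IOrderRepository
{
    Order GetById(int orderId);
    void Save(Order order);
}

// BLL contract: a business operation exposed to consumers of the API.
public interface ICustomerService
{
    void PlaceOrder(PlaceOrderRequest request);
}

// Shared utility code can still live in a base class, because a class can
// inherit from a base class and implement one or more interfaces at the same time.
public abstract class RepositoryBase
{
    protected void Log(string message) { /* shared helper code */ }
}

public class SqlOrderRepository : RepositoryBase, IOrderRepository
{
    public Order GetById(int orderId)
    {
        Log("Loading order " + orderId);
        return new Order { Id = orderId };  // placeholder for the actual query
    }

    public void Save(Order order)
    {
        Log("Saving order " + order.Id);    // placeholder for the actual persistence code
    }
}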
About the BLL
The BLL should be explicit about the business logic it supports. Therefore it is generally a good idea to have methods that are directly related to business operations. For added clarity, instead of overloading methods to work with different parameters, I believe it is better to have one method that accepts a single parameter. This parameter would be an object containing all the data for the method to work with. Some of that data could be required, some could be optional and would influence the effect of the operation.
Implementation detail: this kind of BLL API is fully supported by the ObjectDataSource control built into ASP.NET Web Forms.
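
For example, one coarse-grained BLL method taking a single request object (the type and member names below are hypothetical; the DAL method names echo the question above) might look like this:

using System.Collections.Generic;

public class Article { public int Id { get; set; } }

// Hypothetical stand-ins for the fine-grained, data-centric DAL methods.
internal static class ArticleData
{
    internal static List<Article> GetAllArticles() { return new List<Article>(); }
    internal static List<Article> GetPublishedArticles() { return new List<Article>(); }
    internal static List<Article> GetPublishedArticlesByCategory(int categoryId) { return new List<Article>(); }
}

// All the data the operation works with, required and optional, in one object.
public class GetArticlesRequest
{
    public int? CategoryId { get; set; }      // optional filter
    public bool PublishedOnly { get; set; }   // optional filter
}

public static class ArticleManager
{
    public static List<Article> GetArticles(GetArticlesRequest request)
    {
        // One coarse-grained BLL entry point dispatching to the data-centric DAL calls.
        if (request.PublishedOnly && request.CategoryId.HasValue)
            return ArticleData.GetPublishedArticlesByCategory(request.CategoryId.Value);
        if (request.PublishedOnly)
            return ArticleData.GetPublishedArticles();
        return ArticleData.GetAllArticles();
    }
}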
About the API
An API should contain all methods the designer can come up with, within the scope defined by the use cases the API is intended to support.

Exposing Entities Outside The Assembly

I'm looking for opinions on best practices with regards to passing entities beyond assembly boundaries. I'm using Linq-To-SQL, but the same question would apply to Entity Framework, NHibernate, etc.
I have an assembly that I want to reuse in multiple projects. In it there are several entities which I have so far kept internal; however, I am finding it would be beneficial to return a list of those entities to the caller. Should I create a new class to encapsulate the data, or should I just expose the entity itself?
For example, let's say I have an Address entity. Would it be better to have a method GetAddress(...) that returns the Address entity, or should I create another class with the same properties to expose the Address data?
Thanks!
One vote for just exposing the entities. In practice, the reasons for hiding the entities behind DTOs end up not really being relevant. For example, when was the last time you ripped out your internal data access layer for something entirely different that would have caused you to lose the auto-generated entity classes?
Plus, you save time by avoiding the painful mapping exercise that ensues when you expose only DTOs. IMO, having an automatic mapping tool that uses reflection or something doesn't count as "no pain", because now you pay in performance what you would otherwise pay in tedium.
You might want to consider using a Repository to expose the Entities to outside assemblies. Here is a great CodeProject article on a generic Repository that can be used with EF.
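For a rough idea of the shape such a repository usually takes (a generic sketch against Entity Framework's DbContext API, not the article's exact code):

using System;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// A minimal generic repository contract sitting in front of an EF context.
public interface IRepository<TEntity> where TEntity : class
{
    TEntity GetById(int id);
    IQueryable<TEntity> Find(Expression<Func<TEntity, bool>> predicate);
    void Add(TEntity entity);
    void Remove(TEntity entity);
}

// Sketch of an EF-backed implementation; DbContext.Set<TEntity>() does the heavy lifting.
public class EfRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private readonly DbContext _context;

    public EfRepository(DbContext context) { _context = context; }

    public TEntity GetById(int id) { return _context.Set<TEntity>().Find(id); }

    public IQueryable<TEntity> Find(Expression<Func<TEntity, bool>> predicate)
    {
        return _context.Set<TEntity>().Where(predicate);
    }

    public void Add(TEntity entity) { _context.Set<TEntity>().Add(entity); }
    public void Remove(TEntity entity) { _context.Set<TEntity>().Remove(entity); }
}

Callers in other assemblies then work with IRepository<Address> (or a DTO-returning wrapper around it) rather than with the context or tables directly.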

ASP.NET Data Access Layer. Is using sqlhelper.cs bad?

I'm about to start a new .NET web project.
Previously, for my n-layer web apps, I've used the "Microsoft Data Access Application Block" (sqlhelper.cs) for data access, with an interface class to interface with the object classes. I'm aware this technique is a bit dated and was looking to use something a little more with the times.
I've looked into LINQ to SQL for data access but was put off by its lack of many-to-many relationship support.
The Entity Framework takes a whole different approach that appears to have too large a learning curve.
Would there be anything wrong with using the sqlhelper.cs class to handle my data access?
Not at all. It supports the creation of multi-tier layers, which separates our data access from our logic. I usually have business and data classes in separate folders and include the SqlHelper class with my data DALC class.
I'm looking forward to the move towards LINQ and the use of generics. That's my next step, and I think the use of SqlHelper promotes good coding practice in the meantime.
I first started using it when I "borrowed" it from the Enterprise Library, which was huge. Ditto for the Entity Framework, which I have yet to come to grips with in the workplace, but all in good time :-)
You can find a simple, good example of a SQL helper class at http://followprogrammers.blogspot.com/
For the last 8 months I've been using LINQ and it works great for all the little jobs. The strong typing and drag-and-drop development make it fantastically easy and super quick.
Before that, and still whenever LINQ doesn't seem right, I use SqlHelper. It cuts out all the donkey work of ADO.NET. I don't see any problem using it.
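
For reference, a typical DAL method that wraps SqlHelper looks roughly like the sketch below. The table, stored procedure and parameter names are invented, and the ExecuteDataset overload shown is the one in the Data Access Application Block version of SqlHelper.cs, so check it against the copy you actually ship:

using System.Data;
using System.Data.SqlClient;

public static class ProductData
{
    // In real code this would come from configuration rather than a constant.
    private const string ConnectionString =
        "Server=.;Database=Shop;Integrated Security=true";

    public static DataTable GetProductsByCategory(int categoryId)
    {
        // SqlHelper opens, executes and disposes the connection and command for us.
        DataSet ds = SqlHelper.ExecuteDataset(
            ConnectionString,
            CommandType.StoredProcedure,
            "usp_GetProductsByCategory",
            new SqlParameter("@CategoryId", categoryId));

        return ds.Tables[0];
    }
}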

Resources