ASP.NET SPA with legacy domain objects

Looking at the Single Page Application beta in MVC 4, I don't see how I can use my legacy domain objects as the model. It seems to require that the model use the Entity Framework, using a DbDataController to get the data, etc.
I do not understand the entity framework so I am probably missing something.
How can I use my legacy domain (with its own DAL) in the SPA of MVC 4?

This was answered by somebody else in an ASP.NET forum.
You won't be able to use anything other than EF if you want to use some of these RAD tools. However, SPA builds on top of MVC, so you should be able to build your own version rather easily. The important components would be building a DataController on top of ApiController and a JS consumer for the service provided by your DataController. It's possible that if you were to format your models in the same format as the EF output (I think it's just OData), you could use upshot.js as well and only have to implement a DataController to format your domain models.
After working with it for a couple of days, I will add that you could, theoretically, use it if the following issues are handled/fixed by you or by future versions of the SPA.
You can create a controller that inherits from System.Web.Http.Data.DataController (and maybe even ApiController). The objects it returns must then just have a property decorated with the System.ComponentModel.DataAnnotations.Key() attribute. I can get the views to work fine, but I am having problems with some of the more advanced features, like grouping.
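For illustration, here is a minimal sketch of that approach; LegacyProduct, LegacyProductsController, and LegacyDal are hypothetical stand-ins for your own types, and DataController comes from the SPA beta bits:
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web.Http.Data;

// Hypothetical legacy domain object; the [Key] attribute is the only
// decoration the SPA tooling needs to identify the entity.
public class LegacyProduct
{
    [Key]
    public int ProductId { get; set; }

    public string Name { get; set; }
}

// Stand-in for your existing data access layer.
public class LegacyDal
{
    public List<LegacyProduct> GetAllProducts()
    {
        // Stub; your real DAL would hit the database here.
        return new List<LegacyProduct>();
    }
}

// Hypothetical controller built on the SPA beta's DataController,
// pulling data from the existing DAL instead of Entity Framework.
public class LegacyProductsController : DataController
{
    public IQueryable<LegacyProduct> GetProducts()
    {
        return new LegacyDal().GetAllProducts().AsQueryable();
    }
}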
Read-only properties will not be returned, I am guessing because of a problem with the current JSON serializer; this should be fixed.
Of course, the entire object will be serialized, which can be very problematic if your domain objects are complex with child objects, especially if some of those objects have serialization issues of their own.
Related to the complex serialization, the current JSON serializer cannot handle circular references in the referenced domain objects.
I have also run into problems getting updates/deletes/inserts posted back when using my own controller that inherits from System.Web.Http.Data.DataController (the examples use DbDataController).

Related

Which approach to use with ASP.NET MVC 3 when the data source is an EF context

For an ASP.NET MVC 3 website, which will be a big site (six man-months for the FIRST version), I'm trying to work out the right approach to combine the power of ASP.NET MVC and the power of EF.
The "power" I hope I could use of EF:
POCO generation
LINQ Queries to interrogate the database
The navigation between objects parents <-> children
The lazy loading
The "power" I hope I could use with MVC(regarding data):
Data validation
?? The usage of EF object as Strong type for my view?
But I have several concerns:
The "recursive" serialization that will happen if I serialize my EF entities (with bi-directional links) to JSON; how do I avoid this?
I cannot put validation attributes on the generated POCO classes.
So, I know my question is a little "generic", but I can't find a good link pointing me in a direction that solves all the problems that come with combining these two technologies. Do you know a website, or have you run into these kinds of problems yourself?
The usage of EF objects as strong types for my views?
No, this is not a power. In most cases it is just an issue that raises both of your main concerns. Use specialized view models for your views and JSON-handling actions and you will be fine. If you worry about conversions between view models and entities, check out AutoMapper to simplify this.
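As a minimal sketch of that idea (Employee, Department, and EmployeeViewModel are hypothetical; the static Mapper API shown matches older AutoMapper versions):
using AutoMapper;

public class Department
{
    public string Name { get; set; }
}

// Hypothetical entity; imagine Department also links back to its
// employees, creating the bi-directional graph the question worries about.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Department Department { get; set; }
}

// The view model exposes only what the view or JSON action needs,
// flattening the graph so there is nothing circular to serialize,
// and it can carry validation attributes the generated POCO cannot.
public class EmployeeViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string DepartmentName { get; set; }
}

public static class MappingConfig
{
    // Call once at application start (e.g. in Application_Start).
    public static void Configure()
    {
        // AutoMapper's flattening convention maps Department.Name
        // to DepartmentName automatically.
        Mapper.CreateMap<Employee, EmployeeViewModel>();
    }
}
An action can then return Json(Mapper.Map<Employee, EmployeeViewModel>(employee)) without tripping over circular references.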
Btw., lazy loading in a web application can be an issue as well. You almost always know what data you need to handle the current request, so load it directly (eagerly) instead of using lazy loading.

Entity Framework (Questions on POCO, Context, and DTO)

I have been reading about Entity Framework over the past couple of days and have managed to get a fair idea of using it, but I still have a couple of questions, some of which might seem a bit too basic. For perspective, I am using Entity Framework 4.0 in an ASP.NET web application. If you can answer any of the questions, please go ahead.
What advantage do I get by using POCO templates? I understand that if I wish to get persistence ignorance and keep my entities clear of any information related to storage, POCO entities are the way to go. Also, I could switch from Entity Framework to, say, NHibernate with relative ease when using POCO entities? Apart from loose coupling, is there any significant reason for me to go towards POCO entities? Also, if I do use POCO, do I end up losing anything? Do I still get change tracking and lazy loading with the help of proxies?
Is it normal practice to use the entities of the EF model as Data Transfer Objects or business objects? For example, I have a separate class library for my entity model. Supposing I am using MVP, and I want a list of Employees in a company: the presenter would call my business logic functions, which would query the entity model for the list of Employees and return the list of entities to the presenter. In this case, my presenter would need to have a reference to the EF model. Is this the correct way? In the case of my ASP.NET web application it shouldn't be a problem, but if I am using web services, how does this work? Is this the reason to go towards POCO entities?
Supposing the Employee entity has a navigation property to a Company table: if I wrap the data context in a 'using' block and try to access the navigation property in the BL, I am assuming I would get an exception. Would I also get an exception if I turned off lazy loading and used the 'Include' LINQ query to get the entity? On a previous post someone recommended I use a context per request, implying that the context remains active even when I am in the BL. I am assuming I would still need to detach the object and attach it to the context on my next request if I wish to persist any changes I make? Or should I instead just query for the object again with the new context and update it?
This question has more to do with organizing files/best practices and is a follow-up to a question I posted earlier. When I am using separate files based on entities to organize my data access layer, what is the best practice for organizing my queries involving joins between multiple tables? I am still a bit hazy on organization. I have tried searching online but haven't had much help.
Terrific question. My first recommendation is to think in patterns. With that said...
You pretty much nailed the advantages of using POCO. There are some distinct advantages to decoupling your business objects (POCO entities) from your data access layer, but the primary reason is, like you said, the ability to change or modify the layers below. However, using POCO you are essentially following the Code First (CF) approach. Personally, I consider it Code In Parallel, depending upon your software development life cycle. You still have all the bells and whistles that the data-first or model-first approaches have, and then some, since you can extend DbContext, which is an ObjectContext under the hood. I read an article, which I cannot seem to find, arguing that CF is the future of Entity Framework. Lastly, the nice thing with POCO is that you are able to incorporate validation rules here or elsewhere. You can also provide projections. Let's say you have Date of Birth but you want an Age property as well. That now becomes a no-brainer, as the Age property is simply ignored when mapping to the database.
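A minimal sketch of that projection, assuming a hypothetical Person POCO and Code First context (read-only properties are typically skipped by convention anyway, but the explicit Ignore makes the intent obvious):
using System;
using System.Data.Entity;

public class Person
{
    public int Id { get; set; }
    public DateTime DateOfBirth { get; set; }

    // Projection over stored data; computed, never persisted.
    public int Age
    {
        get { return (int)((DateTime.Today - DateOfBirth).TotalDays / 365.25); }
    }
}

public class PeopleContext : DbContext
{
    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Tell Code First not to map the computed property to a column.
        modelBuilder.Entity<Person>().Ignore(p => p.Age);
    }
}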
Personally, I create my own business objects (POCO) for large projects that tend to have a life of their own, where change is a way of life. Another thought is scalability and maintainability. What if down the road I choose to split functionality between applications where, like you mentioned with web services, functionality is now delivered from two disparate locations? If you have encapsulated your business objects and DAL within the same code block, separation or scalability has now become a bit more complex. However, consider the project: it may be small with very little future change, so there is no need to throw a grenade to kill a fly. In that case data-first might be the way to go, letting the edmx file represent your objects. So don't marry yourself to one technology or one methodology/pattern. Do what makes sense for your time and business.
Using statements are perfectly fine. In fact, I've recently been turned on to wrapping them within a TransactionScope; if an error occurs, rollbacks are inherent. Next, something to consider is the Unit of Work. The UnitOfWork pattern encapsulates a snapshot of what needs to be performed, where the Data Context is the boundary within which you work. For each UnitOfWork you have a subject on which work is to be performed, for example an Employee. So to save Employee information, to keep it simple, you would make a call to the BL service or repository (whichever). There you pass in the Employee Id and perform some work under that UnitOfWork, where it is either instantiated in the constructor or supplied via Dependency Injection (DI or IoC); an easy starter is StructureMap. There the service makes the necessary calls to your UnitOfWork (DbContext), then returns control back upstream (e.g. to the UI).
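A minimal sketch of the using-plus-TransactionScope idea; Employee, EmployeesContext, and EmployeeService are hypothetical names:
using System.Data.Entity;
using System.Transactions;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class EmployeesContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }
}

public class EmployeeService
{
    // The context acts as the unit of work; the TransactionScope
    // guarantees that any error rolls the whole operation back.
    public void Rename(int employeeId, string newName)
    {
        using (var scope = new TransactionScope())
        using (var context = new EmployeesContext())
        {
            var employee = context.Employees.Find(employeeId);
            employee.Name = newName;

            context.SaveChanges();
            scope.Complete(); // commit; omitting this rolls the work back
        }
    }
}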
The best way to learn here is to view other people's code. I'd start with some Microsoft examples, such as Nerd Dinner (http://nerddinner.codeplex.com/), then build off that.
Additional Reading:
Use prototype pattern or not
http://weblogs.asp.net/manavi/archive/2011/05/17/associations-in-ef-4-1-code-first-part-6-many-valued-associations.aspx
[EDIT]
NightHawk457, I'm terribly sorry for not responding to your questions. Hopefully you figured it out but for future readers...
To help everyone visualize, imagine the below Architecture using the Domain Model and Repository as an example. Remember, there are many ways to skin a cat so take this and make it your own and don't forget my Grenade comment above.
Data Layer (Data Access): MyDbContext : DbContext, IUnitOfWork, where IUnitOfWork defines the contract for the CRUD operations.
Data Repository (Data Access / Business Logic): MyDomainObjectRepository : IMyDomainObjectRepository, which receives IUnitOfWork via a factory class or Dependency Injection, and calls MyDomainObject validation on CRUD operations.
Domain Model (Business Logic): MyDomainObject using [Custom] Validation Attributes. Read this for pros/cons.
MVVM / MVC / WCF (Presentation / Service Layers): Whatever additional layers you choose, you now have access to your data, wrapped nicely in smaller modules that each encapsulate their own function. The presentation layer (e.g. ViewModel, Controller, code-behind, etc.) can then receive an IMyDomainObjectRepository via a factory class or Dependency Injection.
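To make the contracts concrete, here is a minimal sketch; all names mirror the layers above and are illustrative, not prescriptive:
public class MyDomainObject
{
    public int Id { get; set; }
}

// Contracted by the Data Layer (implemented by MyDbContext).
public interface IUnitOfWork
{
    void Add<T>(T entity) where T : class;
    void Update<T>(T entity) where T : class;
    void Delete<T>(T entity) where T : class;
    void Commit(); // e.g. calls DbContext.SaveChanges()
}

public interface IMyDomainObjectRepository
{
    MyDomainObject GetById(int id);
    void Save(MyDomainObject item);
}

public class MyDomainObjectRepository : IMyDomainObjectRepository
{
    private readonly IUnitOfWork _unitOfWork;

    // IUnitOfWork arrives via a factory class or dependency injection.
    public MyDomainObjectRepository(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public MyDomainObject GetById(int id)
    {
        // Query through the unit of work; elided in this sketch.
        throw new System.NotImplementedException();
    }

    public void Save(MyDomainObject item)
    {
        // Domain validation would run here before committing.
        _unitOfWork.Update(item);
        _unitOfWork.Commit();
    }
}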
Tips:
Pass the connection string into MyDbContext so you can reuse MyDbContext.
MySql does not play well with System.Transactions.TransactionScope (see example). I don't recall exactly, but it was something MySql did not support. This makes testing a bit difficult since we have created this level of separation.
Create a Test project for each layer and at the minimum test general functionality/rules.
Each Domain Object should extend a base object with an ID field at minimum. Also, do not implement Key attributes here: a Domain Object should not describe architecture, but rather the specific data as an entity. Even with Code First this can be achieved via the Fluent API (see the mapping sketch a little further below).
Think generics when creating MyDbContext. ;) Read Diego's post.
In ASP.NET, the repositories are nice to use with ObjectDataSources.
As you can see, there is clear separation of roles, where IUnitOfWork and IMyDomainObjectRepository are the interfaces that expose the above layers' functionality. And as an example, IUnitOfWork could be NHibernate, Entity Framework, LinqToSql or ADO.NET, where a change to the factory class or dependency injection registration is all that has to change. FYI, I've heard the Repository called the Service Layer as well; personally I prefer the first name, so it is not confused with web services. The next big takeaway from this structure is realizing the scope of your Database Context (IUnitOfWork). A simple example would be an ASP.NET page, where for each page there is one and only one IUnitOfWork, either per repository or for that scope of work. The same holds true for ViewModels, Controllers, etc. So let's say you need to utilize two repositories, EmployeeRepository and HRRepository: you could then share the IUnitOfWork between both, or not. To cross page, ViewModel or Controller boundaries, we use the ID for entities, which are then pulled from the DB and worked on. You could alternatively pass a DTO across boundaries and attach it to the context, but then you begin losing separation of layers.
To continue, POCO classes do not have to be auto-generated. In fact, you can create your entity classes from scratch and perform the mapping in your extended DbContext class inside the OnModelCreating(DbModelBuilder mb) method. Start here, then here and note the Additional Resources, google Fluent API and read this post by Diego.
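A minimal sketch of such a hand-written POCO and its Fluent API mapping (names and column details are hypothetical); note this is also where the key mapping from the tips above would live, keeping [Key] attributes off the domain object:
using System.Data.Entity;

// Hand-written POCO: no generated code, no persistence attributes.
public class Employee
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // All mapping lives here, so the domain object stays clean.
        modelBuilder.Entity<Employee>()
                    .HasKey(e => e.Id)
                    .ToTable("Employees", "dbo");

        modelBuilder.Entity<Employee>()
                    .Property(e => e.FullName)
                    .HasColumnName("full_name")
                    .HasMaxLength(100);
    }
}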
As for validation, this is an interesting point, because it would be GREAT if all business rules could be validated in one location. Well, as we all know, that doesn't work really well. So here is my recommendation: keep all data-level validation (i.e. required, range, format, etc.) as data annotations, as much as possible, in the domain object, and leave process validation in the Repository, with clear roles for each Repository (i.e. if (isEmployee) do this, else that). I say clear, such that you do not want to add an Employee in two different repositories where validation has to be duplicated. To call the validation, start here. Capture the ValidationResults and send them upstream with a MyRepositoryValidationException, which contains a collection of validation errors (e.g. "Employee is required") that can be presented to the presentation layer. With all that said, don't forget to perform validation at the presentation layer too. You don't want to rely on postbacks to make sure an Employee has a valid email, for example.
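A minimal sketch of capturing ValidationResults and rethrowing them; MyRepositoryValidationException and DomainValidator are illustrative names:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Carries the validation failures upstream to the presentation layer.
public class MyRepositoryValidationException : Exception
{
    public IList<ValidationResult> Errors { get; private set; }

    public MyRepositoryValidationException(IList<ValidationResult> errors)
        : base("Validation failed.")
    {
        Errors = errors;
    }
}

public static class DomainValidator
{
    public static void ValidateOrThrow(object domainObject)
    {
        var results = new List<ValidationResult>();
        var context = new ValidationContext(domainObject, null, null);

        // Evaluates [Required], [Range], [StringLength], custom attributes, etc.
        if (!Validator.TryValidateObject(domainObject, context, results, true))
        {
            throw new MyRepositoryValidationException(results);
        }
    }
}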
Just remember to balance time and effort with complexity. For something simple, use Data First or Model First with your EDMX file. Then lay a repository on top of that which also contains all the validation rules.

Elegant validation techniques for Entity First MVC3 site?

Having never done an MVC site, I am about to start a project for a very large one. I feel confident enough to do it, but there is one thing I need help figuring out.
We are definitely going to be using an "Entity First" method and have a single .edmx file defining the models, there are multiple reasons for this but just know that this is a definite piece of the puzzle.
So the piece I need to figure out is how to come up with an elegant way to do validations against Entities on a page, without hand coding each page, at least for the majority of things.
Are there any already popular methods for doing some basic validations? Things like MaxLength or Required or MinDate, etc?
Anything more complex than that and I understand I'd have to code it myself, but this site is going to be very large and I really need to find a way to speed some of the basic tasks up.
EDIT
I should point out a couple important facts.
1) Our database already exists and was created by a DBA before developers even came into the picture.
2) There are hundreds of tables and stored procedures already created.
3) When changes need to be made to the database, they will go through the DBA, to whom we will not always have instant access.
First of all, if you use Entity Framework Code First, you don't have a .edmx file storing your models or the relationships between them: you just write your POCO (Plain Old CLR Object) classes, and that's it; Code First will figure out the relations between your models based on naming conventions.
To validate your (view) models, I recommend using FluentValidation or DataAnnotations. Both let you define validation rules in one place, either using a fluent validation API in different entity validation classes (FluentValidation) or using attributes to decorate your entity properties (DataAnnotations). The advantage of DataAnnotations over FluentValidation is that you get additional client-side validation out of the box.
Whichever framework you choose, both ship with a bunch of predefined validation rules like Required, Range, or MaxLength (see Fluent Validation for .NET or System.ComponentModel.DataAnnotations Namespace for examples).
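For example, a hypothetical view model using DataAnnotations (MVC 3 picks these up for client-side validation automatically; the FluentValidation equivalent would define the same rules in a separate validator class):
using System.ComponentModel.DataAnnotations;

public class RegistrationViewModel
{
    [Required(ErrorMessage = "Name is required.")]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(18, 120)]
    public int Age { get; set; }

    [Required]
    [RegularExpression(@"^\S+@\S+\.\S+$", ErrorMessage = "Enter a valid email address.")]
    public string Email { get; set; }
}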
I would 100% absolutely still use some POCO classes. Download the DbContext generator template, which will then create the Code First classes from your model, OR use the Entity Framework Power Tools to reverse engineer an existing database. The downside to these methods, though, is that you won't get client-side validation; only when saving will you get the validation. You can, however, still add your validation attributes if you so choose, using metadata classes to apply Data Annotations to your properties and get client-side validation via the built-in jQuery unobtrusive validation.
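A minimal sketch of that metadata ("buddy") class approach; Customer is assumed to be a generated partial entity class:
using System.ComponentModel.DataAnnotations;

// The generated half of this partial class lives in the generated code;
// this half only attaches the metadata type.
[MetadataType(typeof(CustomerMetadata))]
public partial class Customer
{
}

// "Buddy" class: annotate here so regenerating the entity
// does not wipe out the attributes.
public class CustomerMetadata
{
    [Required]
    [StringLength(100)]
    public string CompanyName { get; set; }

    [Range(0, 5)]
    public int Rating { get; set; }
}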
However, what we're talking about here goes against basic MVC good practices anyway. Ideally your views should have view models that are, at times, only a portion of an entity; in that case your validation attributes still live on your properties as DataAnnotations, in metadata classes.
If you feel this is all too much work, are fine with just server-side validation, and have decided not to use view models and to rely on Entity Framework's validation (or to use view models and still rely on EF validation), then you will need a handler like the following in your controller (or another layer that has been given access to ModelState) to catch the exception below. Note I use AutoMapper to copy the properties between my view model and my entity.
Entity Framework Power Tools (right-click on your project in Solution Explorer after it's installed and there will be a new 'Entity Framework' menu -> Reverse Engineer; note it doesn't generate [Timestamp] attributes and forgets to put in schema names, but besides that it's pretty good).
[HttpPost]
public ActionResult Create(CustomerCreateViewModel customerViewModel)
{
    if (ModelState.IsValid)
    {
        try
        {
            // Map the incoming view model onto the entity.
            // (Creating the map once at application start is preferable.)
            Mapper.CreateMap<CustomerCreateViewModel, Customer>();
            Customer customer = Mapper.Map<CustomerCreateViewModel, Customer>(customerViewModel);
            var repository = new CustomerRepository(db);
            repository.Save(customer);
            return RedirectToAction("Edit", new { id = customer.CustomerId });
        }
        catch (DbEntityValidationException ex)
        {
            // Surface EF's entity validation errors as model-state errors
            // so they render next to the offending fields.
            foreach (var error in ex.EntityValidationErrors.First().ValidationErrors)
            {
                this.ModelState.AddModelError(error.PropertyName, error.ErrorMessage);
            }
            return View(customerViewModel);
        }
    }
    return View(customerViewModel);
}
Jon Galloway has a nice article called Generating EF Code First model classes from an existing database which I think will help you greatly in getting your application up and going from what you described.
Secondly, from having built out our own MVC application so far, I've found that you're really not going to be working directly with Entity Framework models very often. Most of the time you'll end up with some type of view model for doing your gets and posts. Adding DataAnnotations to those class properties will make it very easy for you to do your validation on the client side. Once you have validated the data from the client side and checked your entities against any business rules, you really should then be able to trust the data and use EF to do your basic CRUD work.
Good luck, and hope this helps you some with your project.
What Marius is trying to tell you is that "Code First" refers to your "model" being defined by fluent code mappings that do not rely on an .edmx file. Therefore, if you're using an .edmx file, you're not doing "code first". You're doing either "Database First" or "Model First" (both of which use the .edmx).
In your case, you already have a database, so you're using the "Database First" approach, using EF 4.1 DbContext. This is not "Code First" (or as you incorrectly stated, Entity First). This is not a semantic quibble, as "code first" has a very specific meaning. This is not it.
Now, on to the rest of your question. Since all your database access has to go through stored procedures, Entity Framework is not a good choice in my opinion. You would be better off using something like NHibernate, which has much better stored procedure support.
EF is intended to represent your relational data model in objects, and generates all its own sql to access and fill these objects. If you have to go through the sprocs, EF will be a constant uphill battle for you.

Using multiple ObjectContexts in Entity Framework 4 with the repository/uow pattern

I am using EF4 and StructureMap in an ASP.NET web application. I am using the repository/unit of work patterns as detailed in this post. In the code, there is a line that delegates the setup of an ObjectContext in global.asax.
EntityUnitOfWorkFactory.SetObjectContext(() => new MyObjectContext());
On the web page code-behind, you can create a generic repository interface like so ...
IRepository<MyPocoObject> ds = ObjectFactory.GetInstance<IRepository<MyPocoObject>>();
My question is: what is a good approach to refactoring this code so that I can use more than one ObjectContext and differentiate between them in the code-behind? Basically, I have two databases/entity models in my application and need to query them both on the same page.
The Unit of Work is used to manage persistence across multiple repositories, not multiple object contexts.
You're not going to be able to persist changes across multiple contexts using a unit of work, as the UoW is simply implemented as a wrapper for an ObjectContext. Therefore, you'll need two units of work.
Overall, things are going to get messy. You're going to have two OCs newed up and disposed on each HTTP request, not to mention transaction management is going to be a nightmare.
Must you have two ObjectContexts? What is the reasoning for this? If it's for scalability, don't bother; it's going to be too painful for other things like your repository, unit of work and http scope management.
It's hard to provide good advice without seeing how you have your repositories set up.
Try creating wrapper classes for each object context, each implementing IUnitOfWork and a secondary unique interface (IEfSqlContext1, etc., which represents one of your models/contexts).
Then you can inject whichever context you want.
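A minimal sketch of that wrapper idea (SalesObjectContext and HrObjectContext stand in for your two generated contexts; the StructureMap registration shown assumes the 2.x API):
public interface IUnitOfWork
{
    void Commit();
}

// Marker interfaces so the container can tell the two contexts apart.
public interface IEfSqlContext1 : IUnitOfWork { }
public interface IEfSqlContext2 : IUnitOfWork { }

public class SalesContextWrapper : IEfSqlContext1
{
    // Stand-in for the ObjectContext generated from the first EDMX.
    private readonly SalesObjectContext _context = new SalesObjectContext();

    public void Commit()
    {
        _context.SaveChanges();
    }
}

public class HrContextWrapper : IEfSqlContext2
{
    // Stand-in for the ObjectContext generated from the second EDMX.
    private readonly HrObjectContext _context = new HrObjectContext();

    public void Commit()
    {
        _context.SaveChanges();
    }
}

// Registration in global.asax, scoped per HTTP request:
// ObjectFactory.Initialize(x =>
// {
//     x.For<IEfSqlContext1>().HttpContextScoped().Use<SalesContextWrapper>();
//     x.For<IEfSqlContext2>().HttpContextScoped().Use<HrContextWrapper>();
// });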
As I said though, try and avoid having two EDMX/Contexts. It's more trouble than it's worth.

What is Reflection?

I am VERY new to ASP.NET. I come from a VB6 / ASP (classic) / SQL Server 2000 background. I am reading a lot about Visual Studio 2008 (have installed it and am poking around). I have read about "reflection" and would like someone to explain, as best as you can to an older developer of the technologies I've written above, what exactly Reflection is and why I would use it... I am having trouble getting my head around that. Thanks!
Reflection is how you can explore the internals of different types without normally having access (i.e. to private, protected, etc. members).
It's also used to dynamically load DLLs and get access to types and methods defined in them without statically compiling them into your project.
In a nutshell: Reflection is your toolkit for peeking under the hood of a piece of code.
As to why you would use it, it's generally only used in complex situations, or code analysis. The other common use is for loading precompiled plugins into your project.
Reflection lets you programmatically load an assembly, get a list of all the types in an assembly, get a list of all the properties and methods in these types, etc.
As an example:
myobject.GetType().GetProperty("MyProperty").SetValue(myobject, "wicked!", null);
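Expanding that one-liner into a small, self-contained sketch (Person and its members are made up for illustration):
using System;
using System.Reflection;

public class Person
{
    public string Name { get; set; }

    private void SaySecret()
    {
        Console.WriteLine("psst: " + Name);
    }
}

public static class ReflectionDemo
{
    public static void Main()
    {
        var person = new Person();
        Type type = person.GetType();

        // Set a public property by name at runtime.
        type.GetProperty("Name").SetValue(person, "wicked!", null);

        // Enumerate every method, public or not.
        foreach (MethodInfo method in type.GetMethods(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            Console.WriteLine(method.Name);
        }

        // Invoke a private method: possible, but it bypasses encapsulation.
        type.GetMethod("SaySecret", BindingFlags.Instance | BindingFlags.NonPublic)
            .Invoke(person, null);
    }
}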
It allows the internals of an object to be reflected to the outside world (code that is using said objects).
A practical use in statically typed languages like C# (and Java) is to allow invocation of methods/members at runtime via a string (e.g. the name of the method; perhaps you don't know at compile time which method you will use).
In the context of dynamic languages I haven't heard the term as much (as generally you don't worry about the above), other than perhaps to iterate through a list of methods/members, etc.
Reflection is .NET's means of manipulating or extracting information about an assembly, class or method at run time. For example, you can create a class at runtime, including its methods. As stated by monoxide, reflection is used to dynamically load assemblies as plugins, or, in advanced cases, to create compilers targeting .NET, like IronPython.
Updated: You may refer to the topic on metaprogramming and its related topics for more details.
When you build any assembly in .NET (ASP.NET, Windows Forms, Command line, class library etc), a number of meta-data "definition tables" are also created within the assembly storing information about methods, fields and types corresponding to the types, fields and methods you wrote in your code.
The classes in the System.Reflection namespace in .NET allow you to enumerate and iterate over these tables, providing an "object model" for you to query and access items in them.
One common use of Reflection is providing extensibility (plug-ins) to your application. For example, Reflection allows you to load an assembly dynamically from a file path, query its types for a specific useful type (such as an Interface your application can call) and then actually invoke a method on this external assembly.
Custom attributes also go hand in hand with reflection. For example, the NUnit unit testing framework lets you indicate a testing class and test methods by adding [TestFixture] and [Test] attributes to your own code.
However then the NUnit test runner must use Reflection to load your assembly, search for all occurrences of methods that have the test attribute and then actually call your test.
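To make that concrete, here is a tiny hand-rolled runner in the same spirit; TestAttribute below is our own stand-in, not NUnit's:
using System;
using System.Reflection;

// Our own stand-in attribute, analogous to NUnit's [Test].
[AttributeUsage(AttributeTargets.Method)]
public class TestAttribute : Attribute { }

public class CalculatorTests
{
    [Test]
    public void AdditionWorks()
    {
        Console.WriteLine("AdditionWorks ran");
    }
}

public static class TinyRunner
{
    public static void Main()
    {
        // A real runner would use Assembly.LoadFrom(path) on your test DLL.
        Assembly assembly = Assembly.GetExecutingAssembly();

        // Find every method marked [Test] and invoke it.
        foreach (Type type in assembly.GetTypes())
        {
            foreach (MethodInfo method in type.GetMethods())
            {
                if (method.GetCustomAttributes(typeof(TestAttribute), false).Length > 0)
                {
                    object instance = Activator.CreateInstance(type);
                    method.Invoke(instance, null);
                }
            }
        }
    }
}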
This is simplifying it a lot, however it gives you a good practical example of where Reflection is essential.
Reflection certainly is powerful; however, beware that it allows you to completely disregard the fundamental concept of access modifiers (encapsulation) in object-oriented programming.
For example, you can easily use it to retrieve a list of private methods in a class and actually call them. For this reason you need to think carefully about how and where you use it, to avoid bypassing encapsulation and creating very tightly coupled (bad) code.
Reflection is the process of inspecting the metadata of an application. In other words, when reading attributes, you've already looked at some of the functionality that reflection offers. Reflection enables an application to collect information about itself and act on this information. Reflection is slower than normally executing static code. It can, however, give you a flexibility that static code can't provide.
