MVC principle in Flex/AS3 - apache-flex

I am currently working on several Flex projects that have gone, in a relatively short amount of time, from prototype to fairly large applications.
The time has come for some refactoring, so naturally the MVC principle came to mind.
For reasons out of my control, a framework (i.e. Robotlegs etc.) is not an option.
So here is the question: what general guidelines should I take into consideration when designing the architecture?
Also, say for example that I have the following: View, Ctrl, Model.
From View:
var ctrlInstance:Ctrl = new Ctrl();
ctrlInstance.performControllerMethod();
In Controller
public function performControllerMethod():void {
    // do some sort of processing and store the result in the model
    Model.instance.result = method_scope_result;
}
and based on the stored values update the view.
As far as storing values in the model that will later be used dynamically in the application (via time filtering or other operations), everything is clear. But in cases where data just needs to go in once (say, a tree that gets populated at loading time), is it really necessary to use the view -> controller -> model -> view update scheme? Or can I just make the controller implement IEventDispatcher and dispatch custom events, holding the necessary data, after the controller operation has finished?
Ex:
View:
var ctrlInstance:Ctrl = new Ctrl();
ctrlInstance.addEventListener(CustomEv.HAPPY_END, onHappyEnd);
ctrlInstance.addEventListener(CustomEv.SAD_END, onSadEnd);
ctrlInstance.performControllerMethod();
Controller
public function performControllerMethod():void {
    (processOk) ? dispatchEvent(new CustomEv(CustomEv.HAPPY_END, theData)) : dispatchEvent(new CustomEv(CustomEv.SAD_END));
}
When one of the event handlers kicks in, do a cleanup of the event listeners (via event.currentTarget).
As I realize that this might not be a question, but rather a discussion, I would love to get your opinions.
Thanks.

IMO, this whole question is framed in a way that misses the point of MVC, which is to avoid coupling between the model, view, and controller tiers. The View should know nothing of any other tier, because as soon as it starts having references to other parts of the architecture, you can't reuse it.
Static variables that are not constant are basically just asking for trouble (see http://misko.hevery.com/2009/07/31/how-to-think-about-oo/). Some people believe you can mitigate this by making sure you only access these globals through a Controller, but as soon as every View has a Controller, you have a situation where the static Model can be changed from literally anywhere.
If you want to use the principles of a framework without using a particular framework, check out http://www.developria.com/2010/04/combining-the-timeline-with-oo.html and http://www.developria.com/2010/05/pass-the-eventdispatcher-pleas.html . But keep in mind that established frameworks have already solved most of the issues you will encounter, so you're probably better off just using one.

Related

Prism Forms: difference between INavigatedAware and INavigatingAware?

I'm using Prism Forms 7.x in a Xamarin Forms app. Until now, I have been using the INavigatedAware interface in view models to check whether a navigation to or from the respective view model happened. Now I have seen that there is INavigatingAware, which only provides the OnNavigatingTo method (i.e. it is called before the navigation has finished).
My questions regarding INavigatingAware.OnNavigatingTo:
- Can I use INavigatingAware where I'm not interested in the OnNavigatedFrom call?
- Is it better in terms of performance to load data within OnNavigatingTo (before the BindingContext is set, so that e.g. the data bindings don't need to be updated twice)?
It would be nice if you could share your experiences and best practices regarding these two interfaces.
INavigatingAware.OnNavigatingTo was first introduced to Prism to help developers perform initialization logic, similar to ViewWillAppear.
To help visualize this, the sequence of events within the NavigationService looks something like this:
Create the Page
Set the ViewModelLocator.Autowire property if it is null
Apply any behaviors from the PageBehaviorFactory
Call IConfirmNavigation.CanNavigate (and its async counterpart) on the Page/ViewModel we're navigating away from
Call INavigatingAware.OnNavigatingTo
Push the page onto the NavigationStack
Call INavigatedAware.OnNavigated{From|To}
BREAKING CHANGE
Now, all of that said, we have had a tremendous amount of feedback on INavigatingAware (the essence of this very question). As a result of the overwhelming feedback from the Prism community, INavigatingAware has been made a hard obsolete in Prism 7.2. This means that it was removed from INavigationAware and will produce a compile-time error if you implement it directly. For those cases where you got it for free from INavigationAware, it simply will not be called. Moving forward, we have introduced a series of interfaces to make this easier and more self-documenting as to intent.
New Interfaces & API
IInitialize.Initialize
IInitializeAsync.InitializeAsync
The new IInitialize interface is the direct replacement for INavigatingAware. We have long received feedback that people would like the ability to perform async tasks during initialization. The issue is that this can cause a very noticeable delay in navigation, similar to IConfirmNavigationAsync. If you use either of those async interfaces, be sure to include some sort of busy/loading overlay on your screen.
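As a sketch of how the new interfaces are typically implemented (ItemDetailViewModel and IItemService are illustrative names, not Prism types, and namespaces may vary slightly between Prism versions):
using System.Collections.Generic;
using System.Threading.Tasks;
using Prism.Navigation;

public interface IItemService
{
    Task<IReadOnlyList<string>> LoadItemsAsync();
}

public class ItemDetailViewModel : IInitialize, IInitializeAsync
{
    private readonly IItemService _itemService;

    public string Title { get; private set; }
    public IReadOnlyList<string> Items { get; private set; }
    public bool IsBusy { get; private set; }

    public ItemDetailViewModel(IItemService itemService)
    {
        _itemService = itemService;
    }

    // Synchronous replacement for INavigatingAware.OnNavigatingTo:
    // runs once, before the page is pushed onto the NavigationStack.
    public void Initialize(INavigationParameters parameters)
    {
        Title = parameters.GetValue<string>("title");
    }

    // Async variant: navigation is delayed until this completes, so a
    // busy/loading overlay should be shown while it runs.
    public async Task InitializeAsync(INavigationParameters parameters)
    {
        IsBusy = true;
        try
        {
            Items = await _itemService.LoadItemsAsync();
        }
        finally
        {
            IsBusy = false;
        }
    }
}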

Entity Framework with 3-tier architecture, different entities across domains

I know the title sounds like a duplicate of quite a few existing posts, but I've read quite a few of them and my situation is actually quite different. I would really appreciate it if anyone experienced with Entity Framework could offer some advice on the best architecture for the following scenario.
I have a WPF application with a 3-tier layout: Data Access Layer, Business Logic Layer, and UI Presentation Layer. The UI uses MVVM, and the DAL uses Entity Framework. The UI and the Data Access Layer each have their own models, GUIModel and DataModel.
The current design uses a global DbContext for Entity Framework across the application. For a simple update operation, an entity is retrieved from the database as a DataModel, converted into a GUIModel, wired to the ViewModel and View for updates, and converted back into a DataModel to update the database. And here's the problem: the DataModel created by that conversion is no longer related to the entity originally retrieved, and Entity Framework cannot perform the update because it now has two objects with the same primary key attached to the same DbContext.
I did a bit of research and found a couple of possible ways to fix this. One is to use a single model entity across all layers instead of separate GUIModel and DataModel classes, and to break the global DbContext into units of work. This seems to be a very common design, but my concerns with this approach are that merging GUIModel and DataModel violates the separation of responsibilities, and that using units of work requires the Business Layer to control the lifetime of the DbContext, which also blurs the boundary between BLL and DAL.
The second alternative would be to use a local DbContext for every database query, inside a using block. This seems most memory efficient, but doing it this way makes lazy loading impossible, and eager loading all navigation properties in every query would likely hurt performance. Also, the short-lived DbContexts require working with a completely disconnected graph, which becomes quite complicated in terms of change tracking.
A third possibility would be to cache all the original DataModels and apply the updates to those tracked entities.
I am new to Entity Framework and I'm sure there should be other ways to fix this issue too. I'll really appreciate it if anyone could offer some insights on the best way to approach this.
A better approach: when you are performing an update in your repository, first fetch the entity by its primary key, so the DbContext is tracking the entity to be updated; then assign the updated fields and save the context.
Here is code:
public void UpdateEntity(Entity updatedEntity)
{
    using (var db = new DBEntities())
    {
        // Fetch the tracked entity by primary key, then copy the updated fields onto it.
        var entity = db.Entities.Find(updatedEntity.Id);
        if (entity != null)
        {
            entity.Name = updatedEntity.Name;
            entity.Description = updatedEntity.Description;
            entity.LastModifiedBy = updatedEntity.LastModifiedBy;
            entity.Value = updatedEntity.Value;
            entity.LastModifiedOn = DateTime.Now;
            db.SaveChanges();
        }
    }
}
I would recommend using separate business objects, as described in your second alternative. In a multi-tier scenario, you would create reusable objects that support your use cases from the UI perspective and model the behavior of your business domain (what you call "GUIModel"). Those models should focus on the behavior of your system and contain only the data needed to support that behavior. This is in direct contrast to entity classes, which focus on data.
Example: Northwind Database, Customers Table. The entity would be a class containing all properties of a customer, probably having navigation properties to related things. Would you really want to use this model when you need to display a list of condensed customer information in the dropdown of an auto completion search box? Would you want to use the same model to display customers together with their aggregated invoice data in a grid? You would need to load all customer information together with related invoices to your presentation tier. You probably don't want to do that.
If you had different models for different use cases, things would make more sense from an object oriented point of view:
Class CustomerSearchResult: Id, Name. GetCustomerEdit method.
Class CustomerInvoiceInfo: Id, Name, Aggregated invoice values. GetCustomerEdit method.
Class CustomerEdit: All properties you want to display and edit, timestamp for optimistic concurrency checks. Change tracking logic, validation logic. Methods that model behavior that you need while editing a customer.
Class CustomerEntity: this is your data object that resembles the customers table. You use it as DTO to initialize the other objects from the database or push changes back into the database. You don't send it across the wire.
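To make the list above concrete, here is a minimal sketch of what these classes might look like (the members shown are illustrative, trimmed to what each use case needs):
// Condensed, read-only projection for the auto-completion search box.
public class CustomerSearchResult
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    public CustomerSearchResult(int id, string name)
    {
        Id = id;
        Name = name;
    }
}

// Projection carrying only the aggregated invoice values needed by the grid.
public class CustomerInvoiceInfo
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public decimal OpenInvoiceTotal { get; private set; }

    public CustomerInvoiceInfo(int id, string name, decimal openInvoiceTotal)
    {
        Id = id;
        Name = name;
        OpenInvoiceTotal = openInvoiceTotal;
    }
}

// Rich model used while editing: change tracking, concurrency token, behavior.
public class CustomerEdit
{
    public int Id { get; private set; }
    public byte[] RowVersion { get; private set; }  // optimistic concurrency check
    public bool IsDirty { get; private set; }       // simple change tracking

    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; IsDirty = true; }
    }

    public CustomerEdit(int id, string name, byte[] rowVersion)
    {
        Id = id;
        _name = name;
        RowVersion = rowVersion;
    }
}

// Plain data object mirroring the Customers table; used only as a DTO to
// initialize the objects above and to push changes back to the database.
public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public byte[] RowVersion { get; set; }
    // ...remaining columns
}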
This way, when you get to the data access layer, you can put your DbContext into using blocks and respect the unit of work pattern. Of course, you will need to reflect changes made to the CustomerEdit instance by creating a new CustomerEntity from it and reattaching it to the context as modified:
context.Entry(entity).State = EntityState.Modified;
context.SaveChanges();
This seems complex and burdensome at first, but actually, Entity Framework doesn't contain any magic that helps you much in a disconnected (n-tier) scenario. If you try using things like lazy loading or keeping DbContext instances open all the time, things get out of hand pretty fast.
If you're looking for a framework that helps in creating Business Objects and supports multi tier architectures, take a look into CSLA.net. Disclaimer: Many people here don't like it. It will make things worse if used wrong. Still, it helped me in some projects and I'm happy with it.
You can attach an entity to an existing DbContext using the following code; there is also a good post about entity states on MSDN.
var existingBlog = new Blog { BlogId = 1, Name = "ADO.NET Blog" };
using (var context = new BloggingContext())
{
    context.Entry(existingBlog).State = EntityState.Modified;
    // Do some more work...
    context.SaveChanges();
}
Regarding 3-tier, I would like to start with a short description of each tier in a .NET context:
Presentation: the layer that returns results to the user; it could take the form of an ASP.NET website, Windows Forms, Web API, a WCF service, or anything else.
Business: this should include the domain model of your business, the business logic, and services that provide operations across multiple domain entities.
Data access/persistence: this layer should include the logic to persist and retrieve the domain model from durable media such as a database, the file system, etc.
Generally, the common issue here is deciding which model goes into which layer: for example, should class X go into presentation or business? I recommend an easy test to help you decide: imagine introducing a new sibling layer. Ask yourself, if you were to build another presentation layer (a console application instead of Windows Forms, say), would you have to copy and paste that logic into the new layer? If yes, there is a good probability that your classes are not in the right place.
Finally, some concrete recommendations:
Keep separate models in each layer, since each layer has a unique responsibility; there are also good frameworks that can help you map between the models, such as AutoMapper (a minimal sketch follows this list).
Don't transfer Entity Framework models across the layers, as this ruins the separation of concerns; it causes even more issues if you have lazy loading enabled.
Try to avoid lazy loading unless you know what you are doing; one of the common pitfalls is the Select N+1 problem, and here is a good article describing it.
Also, if you have a complex business domain, try to separate querying the system from updating it by applying the CQRS pattern; there are some frameworks that can help on the query side, such as Dapper.
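For the mapping recommendation above, here is a minimal sketch using AutoMapper's MapperConfiguration API (available in recent AutoMapper versions; CustomerEntity and CustomerDto are illustrative names, not types from the question):
using AutoMapper;

public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MappingExample
{
    public static CustomerDto ToDto(CustomerEntity entity)
    {
        // In a real application, configure the mapper once at startup and reuse it.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<CustomerEntity, CustomerDto>());
        var mapper = config.CreateMapper();
        return mapper.Map<CustomerDto>(entity);
    }
}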

Passing ViewModel from Presentation to Service - Is it Okay?

In one of my views, I have a ViewModel which I populate from two tables, and then bind a List<ViewModel> to an editable GridView (ASP.NET Web Forms).
Now I need to send that edited List<ViewModel> back to the Services layer to update it in the database.
My question is: is it okay to send the ViewModel back to the Services layer, or should it stay in the Presentation layer? If not, should I use a DTO instead? Many thanks.
Nice question!
After several (hard) debates with my teammates, plus my experience with MVC applications, I would not recommend passing view models to your service/domain layer.
ViewModel belongs to presentation, no matter what.
Because a view model can be a combination of different models (e.g. one view model built from ten models), your service layer should only work with your domain entities.
Otherwise your service layer will end up unusable, because it is constrained by view models that are specific to a single view.
Nice tools like AutoMapper (https://github.com/AutoMapper/AutoMapper) were made to do the mapping job.
I would not do it. My rule is: supply service methods with everything they need to do their job and nothing more.
Why?
Because it reduces coupling. More often than not, service methods are addressed by several consumers. It is much easier for a consumer to fulfil a simple method signature than to build a relatively complex object like a view model that it otherwise may have nothing to do with. It may even need a reference to an assembly it wouldn't need otherwise.
It greatly reduces maintenance effort. I think an average developer spends more than 50% of his time inspecting and tracking existing code (maybe even much more). Everybody knows that looking for something that is not there takes disproportionately long: you must have looked everywhere to be sure. If a method receives arguments (or an object with properties) that are not used directly or further down the call stack, you or others will walk this long road time and again.
So if there is anything in the view model that does not play a part in the service method, don't use it to call the method.
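To illustrate that rule with a minimal sketch (the types and names here are hypothetical, not from the question):
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int customerId);
    void Save(Customer customer);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    // Instead of UpdateCustomerName(CustomerViewModel viewModel), the method
    // asks for exactly the data it needs, so any consumer can call it without
    // having to construct a view model it otherwise has nothing to do with.
    public void UpdateCustomerName(int customerId, string firstName, string lastName)
    {
        var customer = _repository.GetById(customerId);
        customer.FirstName = firstName;
        customer.LastName = lastName;
        _repository.Save(customer);
    }
}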
Yes, I am pretty sure it is okay.
Try using Entity Framework; it will help you a lot.

Entity Framework (Questions on POCO, Context, and DTO)

I have been reading about Entity Framework over the past couple of days and have managed to get a fair idea of using it, but I still have a couple of questions, some of which might seem a bit basic. For perspective, I am using Entity Framework 4.0 in an ASP.NET web application. If you can answer any of the questions, please go ahead.
What advantage do I get by using POCO templates? I understand that if I wish to achieve persistence ignorance and keep my entities clear of any storage-related information, POCO entities are the way to go. Also, I could switch from Entity Framework to, say, NHibernate with relative ease when using POCO entities. Apart from loose coupling, is there any significant reason for me to go towards POCO entities? And if I do use POCO, do I end up losing anything? Do I still get change tracking and lazy loading with the help of proxies?
Is it normal practice to use the entities of the EF model as data transfer objects or business objects? For example, suppose I have a separate class library for my entity model and I am using MVP, and I want a list of Employees in a company. The presenter would call my business logic functions, which would query the entity model for the list of Employees and return the list of entities to the presenter. In this case my presenter would need a reference to the EF model. Is this the correct way? In my ASP.NET web application it shouldn't be a problem, but if I am using web services, how does this work? Is this the reason to go towards POCO entities?
Suppose the Employee entity has a navigation property to a Company table. If I wrap the data context in a 'using' block and try to access the navigation property in the BL, I assume I would get an exception. Would I also get an exception if I turned off lazy loading and used an 'Include' LINQ query to get the entity? In a previous post someone recommended I use a context per request, implying that the context remains active even while I am in the BL. I assume I would still need to detach the object and attach it to the context on my next request if I wish to persist any changes I make? Or should I instead just query for the object again with the new context and update that?
This question has more to do with organizing files/best practices and is a follow-up to a question I posted earlier. When I am using separate files based on entities to organize my data access layer, what is the best practice for organizing queries that involve joins between multiple tables? I am still a bit hazy on the organization; I have tried searching online but haven't had much help.
Terrific question. My first recommendation is to think in patterns. With that said...
You pretty much nailed the advantages of using POCO. There are some distinct advantages to decoupling your business objects (POCO entities) from your data access layer, but the primary reason is, like you said, the ability to change or modify the layers below. However, using POCO you are essentially following the Code First (CF) approach. Personally, I consider it Code In Parallel, depending upon your software development life cycle. You still have all the bells and whistles that the data-first or model-first approaches have, and then some, since you can extend DbContext, which is ObjectContext under the hood. I read an article, which I cannot seem to find, arguing that CF is the future of Entity Framework. Lastly, the nice thing with POCO is that you are able to incorporate validation rules there or elsewhere. You can also provide projections. Let's say you have Date of Birth but want an Age property as well: that becomes a no-brainer, as the Age property is simply ignored when mapping to the database.
Personally, I create my own business objects (POCO) for large projects that tend to take on a life of their own, where change is a way of life. Another thought is scalability and maintainability. What if down the road I choose to split functionality between applications where, as you mentioned with web services, functionality is delivered from two disparate locations? If you have encapsulated your business objects and DAL within the same code block, separation and scalability have now become a bit more complex. However, consider the project: it may be small, with very little future change, so there is no need to throw a grenade to kill a fly. In that case model-first or data-first might be the way to go, letting the edmx file represent your objects. So don't marry yourself to one technology or one methodology/pattern; do what makes sense for your time and business.
Using statements are perfectly fine. In fact, I've recently been turned on to wrapping them within a TransactionScope; if an error occurs, rollbacks are inherent. Next, something to consider is the Unit of Work pattern. UnitOfWork encapsulates a snapshot of what needs to be performed, where the data context forms the boundary you work within. For each UnitOfWork you have a subject on which the work is to be performed, for example an Employee. So if you are to save Employee information, to keep it simple, you would make a call to the BL service or repository (whichever). There you pass in the Employee Id and perform some work under that UnitOfWork, which is either instantiated in the constructor or supplied via dependency injection (DI or IoC); an easy starter is StructureMap. The service then makes the necessary calls to your UnitOfWork (DbContext) and returns control back upstream (e.g. to the UI).
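A minimal sketch of that combination, assuming EF with a DbContext (the Order entity and ShopContext are illustrative, not from the question); TransactionScope rolls back automatically if Complete() is never reached:
using System;
using System.Data.Entity;
using System.Transactions;

// Illustrative entity and context, not from the question.
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public DateTime PlacedOn { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderProcessor
{
    public void PlaceOrder(int customerId)
    {
        // If anything throws before scope.Complete(), the transaction rolls back.
        using (var scope = new TransactionScope())
        using (var db = new ShopContext())
        {
            db.Orders.Add(new Order { CustomerId = customerId, PlacedOn = DateTime.Now });
            db.SaveChanges();
            scope.Complete();
        }
    }
}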
The best way to learn here is to look at other people's code. I'd start with some Microsoft examples, for instance Nerd Dinner (http://nerddinner.codeplex.com/), then build off that.
Additional Reading:
Use prototype pattern or not
http://weblogs.asp.net/manavi/archive/2011/05/17/associations-in-ef-4-1-code-first-part-6-many-valued-associations.aspx
[EDIT]
NightHawk457, I'm terribly sorry for not responding to your questions. Hopefully you figured it out, but for future readers...
To help everyone visualize, imagine the below Architecture using the Domain Model and Repository as an example. Remember, there are many ways to skin a cat so take this and make it your own and don't forget my Grenade comment above.
Data Layer (Data Access): MyDbContext : DbContext, IUnitOfWork, where IUnitOfWork contracts the CRUD operations.
Data Repository (Data Access / Business Logic): MyDomainObjectRepository : IMyDomainObjectRepository, which receives IUnitOfWork by Factory class or Dependency Injection. Calls MyDomainObject validation on CRUD operations.
Domain Model (Business Logic): MyDomainObject using [Custom] Validation Attributes. Read this for pros/cons.
MVVM / MVC / WCF (Presentation / Service Layers): whatever additional layers you choose, you now have access to your data, wrapped nicely in smaller modules that encapsulate their own function. The presentation layer (e.g. ViewModel, Controller, code-behind, etc.) can then receive an IMyDomainObjectRepository from a factory class or by dependency injection. A bare-bones sketch of these pieces follows below.
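Here is that sketch (member lists are illustrative, kept to the minimum needed to show the shape of the layering):
using System.Data.Entity;

// Data Layer: the context doubles as the unit of work.
public interface IUnitOfWork
{
    TEntity Find<TEntity>(int id) where TEntity : class;
    void Add<TEntity>(TEntity entity) where TEntity : class;
    int SaveChanges();
}

public class MyDbContext : DbContext, IUnitOfWork
{
    public TEntity Find<TEntity>(int id) where TEntity : class
    {
        return Set<TEntity>().Find(id);
    }

    public void Add<TEntity>(TEntity entity) where TEntity : class
    {
        Set<TEntity>().Add(entity);
    }

    // SaveChanges is inherited from DbContext and satisfies IUnitOfWork.
}

// Domain Model
public class MyDomainObject
{
    public int Id { get; set; }
}

// Data Repository: receives the unit of work and exposes domain operations.
public interface IMyDomainObjectRepository
{
    MyDomainObject Get(int id);
    void Save(MyDomainObject item);
}

public class MyDomainObjectRepository : IMyDomainObjectRepository
{
    private readonly IUnitOfWork _unitOfWork;

    public MyDomainObjectRepository(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public MyDomainObject Get(int id)
    {
        return _unitOfWork.Find<MyDomainObject>(id);
    }

    public void Save(MyDomainObject item)
    {
        // domain validation would run here before persisting
        _unitOfWork.SaveChanges();
    }
}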
Tips:
Pass the connection string into MyDbContext so you can reuse MyDbContext.
MySql does not play well with System.Transactions.TransactionScope (example). I don't recall exactly, but it was something MySql did not support. This makes testing a bit difficult, since we have created this level of separation.
Create a Test project for each layer and at the minimum test general functionality/rules.
Each domain object should extend a base object with an Id field at minimum. Also, do not implement Key attributes here: a domain object should not describe architecture, but rather the specific data as an entity. Even with Code First this can be achieved through the Fluent API.
Think generics when creating MyDbContext. ;) Read Diego's post.
In ASP.NET, the repositories are nice to use with ObjectDataSources.
As you can see, there is a clear separation of roles, where IUnitOfWork and IMyDomainObjectRepository are the interfaces that expose the functionality of the layers above. As an example, IUnitOfWork could be NHibernate, Entity Framework, LinqToSql or ADO.NET, where a change to the factory class or dependency injection registration is all that has to change. FYI, I've heard the Repository called the Service Layer as well; personally I like the first name, to avoid confusion with web services.
The next big takeaway from this structure is realizing the scope of your database context (IUnitOfWork). A simple example would be an ASP.NET page, where for each page there is one and only one IUnitOfWork, either for each repository or for that scope of work. The same holds true for ViewModels, Controllers, etc. So let's say you need to utilize two repositories, EmployeeRepository and HRRepository; you could then share the IUnitOfWork between both, or not. To cross page, ViewModel or Controller boundaries, we use the ID for entities, which are then pulled from the DB and worked on. You could alternatively pass a DTO across boundaries and attach it to the context, but then you begin losing the separation of layers.
To continue, POCO classes do not have to be auto-generated. In fact, you can create your entity classes from scratch and perform the mapping in your extended DbContext class inside the OnModelCreating(DbModelBuilder mb) method. Start here, then here, and note the Additional Resources; google Fluent API and read this post by Diego.
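A small sketch of a hand-written POCO mapped via the Fluent API inside OnModelCreating (the Employee type, table name, and column settings are illustrative); note how the Age projection mentioned earlier is excluded from the mapping:
using System;
using System.Data.Entity;

// Hand-written POCO, no code generation involved.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }

    // Projection computed in code; excluded from the mapping below.
    public int Age
    {
        get { return (int)((DateTime.Now - DateOfBirth).TotalDays / 365.25); }
    }
}

public class CompanyContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    protected override void OnModelCreating(DbModelBuilder mb)
    {
        mb.Entity<Employee>().ToTable("Employees");
        mb.Entity<Employee>().HasKey(e => e.Id);
        mb.Entity<Employee>().Property(e => e.Name).IsRequired().HasMaxLength(100);
        mb.Entity<Employee>().Ignore(e => e.Age);  // the Age projection is not a column
    }
}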
As for validation, this is an interesting point, because it would be GREAT if all business rules could be validated in one location. Well, as we all know, that doesn't work very well. So here is my recommendation: keep all data-level validation (i.e. required, range, format, etc.) as data annotations on the domain object as much as possible, and leave process validation in the repository, with clear roles for each repository (i.e. if (isEmployee) do this, else that). By clear I mean that you do not want to add an Employee in two different repositories, where the validation would have to be duplicated. To invoke the validation, start here. Capture the ValidationResults and send them upstream in a MyRepositoryValidationException, which contains a collection of validation errors (e.g. "Employee is required") that can be presented to the presentation layer. With all that said, don't forget to also validate at the presentation layer. You don't want to need a postback just to discover that an Employee's email is invalid, for example.
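As a sketch of that split (the Employee type, the exception, and the rules are illustrative; Validator comes from System.ComponentModel.DataAnnotations):
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Data-level rules live on the domain object as annotations.
public class Employee
{
    [Required]
    public string Name { get; set; }

    [Required]
    [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$")]
    public string Email { get; set; }
}

// Carries the collected validation errors upstream to the presentation layer.
public class MyRepositoryValidationException : Exception
{
    public IList<ValidationResult> Errors { get; private set; }

    public MyRepositoryValidationException(IList<ValidationResult> errors)
    {
        Errors = errors;
    }
}

public class EmployeeRepository
{
    public void Save(Employee employee)
    {
        var errors = new List<ValidationResult>();
        var context = new ValidationContext(employee, null, null);
        if (!Validator.TryValidateObject(employee, context, errors, true))
            throw new MyRepositoryValidationException(errors);

        // process validation (e.g. role-specific rules) would run here,
        // then the entity would be persisted via the unit of work
    }
}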
Just remember to balance time and effort against complexity. For something simple, use data-first or model-first with your EDMX file, then lay a repository on top of that which also contains all the validation rules.

Best practices concerning view model and model updates with a subset of the fields

By picking MVC for developing our new site, I find myself in the midst of "best practices" being developed around me in apparent real time. Two weeks ago NerdDinner was my guide, but with the development of MVC 2 even it seems outdated. It's a thrilling experience, and I feel privileged to be in close daily contact with intelligent programmers.
Right now I've stumbled upon an issue I can't seem to get a straight answer on, from all the blogs anyway, and I'd like to get some insight from the community. It's about editing (read: the Edit action). The bulk of the material out there, tutorials and blogs, deals with creating and viewing the model. So while this post may not spell out a single question, I hope to get some discussion going that contributes to my decision about the path of development I'm to take.
My model represents a user with several fields like name, address and email. For the names there is in fact one field each for first name, last name and middle name. The Details view displays all these fields, but you can change only one set of fields at a time, for instance your names. The user expands a form while the other fields remain visible above and below it. So the form that is posted back contains a subset of the fields representing the model.
While this is appealing to us and suits our layout concerns, for various reasons it seems to be shunned by serious MVC developers. I've been reading about some patterns and best practices, and it seems that this is not in keeping with the paradigm of viewmodel == view. Or have I got it wrong?
Anyway, NerdDinner dictates using FormCollection and UpdateModel, where all the null fields are happily ignored. Since then, the MVC community has abandoned this approach to such a degree that a bug in MVC 2 went undiscovered: UpdateModel does not work without a complete model in your FormCollection.
The view model pattern receiving the most praise seems to be the dedicated view model, which contains a custom view model entity and is the only one my design issue could be made compatible with. It entails a tedious amount of mapping, albeit lightened by the use of AutoMapper and the ideas of Jimmy Bogard, which may or may not be worthwhile. He also proposes a 1:1 relationship between view and view model.
In keeping with these design paradigms, I would have to create a view model and an associated view for each of my expanding sets of fields. The view models would each be nearly identical, differing only in which fields are read-only, and the views would contain much repeated markup. This seems absurd to me. In the future I may want to display two, more, or all sets of fields open simultaneously.
I will most attentively read the discussion I hope to spark. Many thanks in advance.
I am doing it like this (the mapping is done automatically inside the model builder with the ValueInjecter):
I have a sample ASP.NET MVC application where I demonstrate best practices for doing this; you can find it in the download of the ValueInjecter.
public ActionResult Edit(long id)
{
    return View(modelBuilder.BuildModel(personService.Get(id)));
}

[HttpPost]
public ActionResult Edit(PersonViewModel model)
{
    if (!ModelState.IsValid)
        return View(modelBuilder.RebuildModel(model));
    personService.Save(modelBuilder.BuildEntity(model));
    return RedirectToAction("Index");
}
a quick demo of the ValueInjecter:
// build the view model
personViewModel.InjectFrom(person)
               .InjectFrom<CountryToLookup>(person);

// build the entity
person.InjectFrom(personViewModel)
      .InjectFrom<LookupToCountry>(personViewModel);
There have been a few posts recently around the issue of validating your models, resulting in this post from Brad Wilson: "Input Validation vs. Model Validation in ASP.NET MVC".
The initial issue had to do with how ASP.NET MVC handled validating a posted model: if there were elements of your model that you didn't want edited and didn't supply fields for in the view, but your controllers were working with the whole model, it was possible for someone to craft a POST to your controller containing those additional fields.
Therefore, using a view-specific model enables you to ensure that only the fields you want edited can be edited.
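To make the over-posting risk concrete, here is a hedged sketch (the types are illustrative): binding the full model would let a crafted POST set IsAdmin, while the view-specific model simply has no such member to bind.
using System.Web.Mvc;

// Full domain model: binding this directly would let a crafted POST set IsAdmin.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsAdmin { get; set; }
}

// View-specific model: it contains only the editable fields, so there is
// nothing extra for an attacker to post.
public class UserEditModel
{
    public string Name { get; set; }
}

public interface IUserRepository
{
    User GetById(int id);
    void Save(User user);
}

public class UserController : Controller
{
    private readonly IUserRepository _repository;

    public UserController(IUserRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public ActionResult Edit(int id, UserEditModel model)
    {
        var user = _repository.GetById(id);
        user.Name = model.Name;   // IsAdmin can never arrive via the form post
        _repository.Save(user);
        return RedirectToAction("Index");
    }
}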
I have the exact same problem, but I wouldn't be able to formulate it that well.
In my case there would be tons of view models, because different users see different forms based on a set of roles. I think the 1:1 relation between view model and view is very vague. What if I write an uber-view which pretty much just uses EditorForModel and not much more? Then I have one, albeit highly degenerate, view for everything, so do I also have only one view model?
My idea was to write an EditorForModel that works not only from reflection (that is, information known at compile time), but also from (domain-specific) runtime rules, for example governed by the current user's role, the current time, etc. Consequently, one also needs to write a custom ModelBinder with validation, as well as a custom mapping from model to view model. Still, this keeps me from writing stupid and thus error-prone code.
Since my model (or domain model) contains a lot of logic, I don't want it to be modified via model binding at all. Moreover, since it is impossible to know at compile time which fields will be present, providing an appropriate view model is impossible. However, the 'full', i.e. maximal, view model is known. Mapping from the view model to the model again involves custom code, but as long as the rules can be formalized, that should work out.
Sorry, my text is very confusing; I am very confused right now myself, plus I've got to run. Like C.T., I couldn't comment either.
Check this out. This is the way to go with ASP.NET MVC 2.
public void Update(MyModel model)
{
    var myModelObject = MyRepository.GetInstance(model.Id);
    if (myModelObject != null)
    {
        // Copy the posted fields onto the persisted instance, then save it.
        // (Save belongs inside the null check so we never save a null object.)
        ModelCopier.CopyModel(model, myModelObject);
        MyRepository.Save(myModelObject);
    }
}
ModelCopier.CopyModel(object from, object to) is a new function in the latest MVC Futures. Also be sure to check out the extensible model binder in MVC Futures 2.
