Elegant validation techniques for Entity First MVC3 site? - asp.net

Having never done an MVC site, I am about to start a project for a very large one. I feel confident enough to do it, but there is one thing I need help figuring out.
We are definitely going to be using an "Entity First" method and have a single .edmx file defining the models. There are multiple reasons for this, but just know that this is a definite piece of the puzzle.
So the piece I need to figure out is how to come up with an elegant way to do validations against Entities on a page, without hand coding each page, at least for the majority of things.
Are there any already popular methods for doing some basic validations? Things like MaxLength or Required or MinDate, etc?
Anything more complex than that and I understand I'd have to code it myself, but this site is going to be very large and I really need to find a way to speed some of the basic tasks up.
EDIT
I should point out a couple important facts.
1) Our database already exists and was created by a DBA before developers even came into the picture.
2) There are hundreds of tables and stored procedures already created.
3) When changes need to be made to the database, they will go through the DBA, to whom we will not always have instant access.

First of all, if you use Entity Framework Code First, you don't have a .edmx file storing your models or the relationships between them: you just write your POCO (Plain Old CLR Object) classes, and that's it; Code First will figure out the relations between your models based on naming conventions.
To validate your (view) models, I recommend using FluentValidation or DataAnnotations. Both let you define validation rules in one place, either using a fluent validation API in different entity validation classes (FluentValidation) or using attributes to decorate your entity properties (DataAnnotations). The advantage of DataAnnotations over FluentValidation is that you get additional client-side validation out of the box.
Whichever framework you choose, both ship with a bunch of predefined validation rules like Required, Range, or MaxLength (see Fluent Validation for .NET or System.ComponentModel.DataAnnotations Namespace for examples).
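For illustration, here is a minimal sketch of a view model decorated with DataAnnotations; the CustomerViewModel type and its members are hypothetical, but the attributes are real ones from System.ComponentModel.DataAnnotations and, in MVC3, get you client-side validation via jQuery unobtrusive validation:

using System;
using System.ComponentModel.DataAnnotations;

// Hypothetical view model used only to illustrate the attributes.
public class CustomerViewModel
{
    [Required(ErrorMessage = "Name is required.")]
    [StringLength(100)] // maximum length of 100 characters
    public string Name { get; set; }

    [Range(1, 150)] // numeric range check
    public int Age { get; set; }

    [DataType(DataType.Date)] // display/editor hint for dates
    public DateTime? BirthDate { get; set; }
}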

I would still absolutely use POCO classes. Download the DbContext generator template, which will generate Code First classes from your model, OR use the Entity Framework Power Tools to reverse engineer an existing database. The downside to these methods is that you won't get client-side validation; you only get validation when saving. You can, however, still add validation attributes yourself if you choose, using metadata "buddy" classes to apply Data Annotations to your properties and get client-side validation through the built-in jQuery unobtrusive validation (see the sketch below).
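A rough sketch of such a buddy class (the Customer entity and its Name property are hypothetical; MetadataTypeAttribute is the real mechanism):

using System.ComponentModel.DataAnnotations;

// The generated entity class is partial, so we can attach a metadata
// "buddy" class without touching the generated code.
[MetadataType(typeof(CustomerMetadata))]
public partial class Customer { }

public class CustomerMetadata
{
    // Property names must match the entity's property names.
    [Required]
    [StringLength(50)]
    public string Name { get; set; }
}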
However, what we're talking about here goes against good MVC design practice anyway. Ideally your views should have ViewModels that are often only a portion of an entity; in that case your validation attributes still live on your properties as DataAnnotations in metadata classes.
If you feel this is all too much work and you are fine with server-side validation only, whether you have decided not to use ViewModels and rely on Entity Framework's validation, or to use ViewModels and still rely on EF validation, then you will need a handler like the following in your controller (or whatever layer has access to ModelState) to catch the exception shown below. Note that I use AutoMapper to copy the properties from my ViewModel to my Entity.
Entity Framework Power Tools (right-click your project in Solution Explorer after it's installed and there will be a new 'Entity Framework' menu -> Reverse Engineer). Note that it doesn't generate [Timestamp] attributes and forgets to put in schema names; besides that, it's pretty good.
[HttpPost]
public ActionResult Create(CustomerCreateViewModel customerViewModel)
{
    if (ModelState.IsValid)
    {
        try
        {
            // Mapping configuration would normally live in application startup.
            Mapper.CreateMap<CustomerCreateViewModel, Customer>();
            Customer customer = Mapper.Map<Customer>(customerViewModel);
            var repository = new CustomerRepository(db);
            repository.Save(customer);
            return RedirectToAction("Edit", new { id = customer.CustomerId });
        }
        catch (DbEntityValidationException ex)
        {
            // Surface EF's validation errors through ModelState so the
            // view can display them next to the offending fields.
            foreach (var error in ex.EntityValidationErrors.First().ValidationErrors)
            {
                this.ModelState.AddModelError(error.PropertyName, error.ErrorMessage);
            }
            return View(customerViewModel);
        }
    }
    return View(customerViewModel);
}

Jon Galloway has a nice article called Generating EF Code First model classes from an existing database, which I think will help you greatly in getting your application up and running, given what you described.
Secondly, from having built out our own MVC application so far, I've found that you're really not going to be working directly with Entity Framework models very often. Most of the time you'll end up with some type of view model for doing your gets and posts. Adding DataAnnotations to those class properties will make it very easy for you to do your validations on the client side. Once you have validated the data from the client side and checked your entities against any business rules, you really should be able to trust the data and use EF to do your basic CRUD work.
Good luck, and hope this helps you some with your project.

What Marius is trying to tell you is that "Code First" refers to your "model" being defined by fluent code mappings that do not rely on an .edmx file. Therefore, if you're using an .edmx file, you're not doing "code first". You're doing either "Database First" or "Model First" (both of which use the .edmx).
In your case, you already have a database, so you're using the "Database First" approach, using EF 4.1 DbContext. This is not "Code First" (or as you incorrectly stated, Entity First). This is not a semantic quibble, as "code first" has a very specific meaning. This is not it.
Now, on to the rest of your question. Since all your database access has to go through stored procedures, Entity Framework is not a good choice in my opinion. You would be better off using something like NHibernate, which has much better stored procedure support.
EF is intended to represent your relational data model as objects, and it generates all of its own SQL to access and fill those objects. If you have to go through the sprocs, EF will be a constant uphill battle for you.

Related

Whose responsibility should it be to paginate: controller/domain service/repository?

My question might seem strange to pros, but please take into account that I am coming from the Ruby on Rails world =)
So, I am learning ASP.NET Core. And I like what I am seeing in it compared to Rails. But there is always that but... Let me describe the theoretical problem.
Let's say I have a Product model, and there are over 9000 records in the database. Obviously, I have to paginate them. I've read this article, but it seems to me that something is wrong there, since the controller shouldn't use the context directly. It should use some repository (though that example might be written that way only for simplicity).
So my question is: who should be responsible for pagination? Should it be the controller, which will receive some queryable object from the repository and take only the records it needs? Or should it be my own business service which does the same? Or should the repository have a method like public IEnumerable<Product> ListProducts(int offset, int page)?
One Domain-Driven-Design solution to this problem is to use a Specification. The Specification design pattern describes a query in an object. So you might create a PagedProduct specification which would take in any necessary parameters (pageSize, pageNumber, filter). Then one of your repository methods (usually a List() overload) would accept an ISpecification and would be able to produce the expected result given the specification. There are several benefits to this approach. The specification has a name (as opposed to just a bunch of LINQ) that you can reason about and discuss. It can be unit tested in isolation to ensure correctness. And it can easily be reused if you need the same behavior (say on an MVC View action and a Web API action).
I cover the Specification pattern in the Pluralsight Design Patterns Library.
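A minimal sketch of the idea, assuming a hypothetical ISpecification<T> shape and a Product entity (this is not the exact API from the course):

using System.Linq;

public interface ISpecification<T>
{
    IQueryable<T> Apply(IQueryable<T> query);
}

// Encapsulates "page N of products, pageSize at a time" as a named,
// unit-testable object.
public class PagedProductsSpecification : ISpecification<Product>
{
    private readonly int _pageNumber;
    private readonly int _pageSize;

    public PagedProductsSpecification(int pageNumber, int pageSize)
    {
        _pageNumber = pageNumber;
        _pageSize = pageSize;
    }

    public IQueryable<Product> Apply(IQueryable<Product> query)
    {
        // A stable ordering is required before Skip/Take in LINQ to Entities.
        return query.OrderBy(p => p.Id)
                    .Skip((_pageNumber - 1) * _pageSize)
                    .Take(_pageSize);
    }
}

// A repository List() overload could then be as simple as:
// public List<Product> List(ISpecification<Product> spec)
// {
//     return spec.Apply(_context.Products).ToList();
// }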
First, I would like to remind you that all the examples you linked are overly simplified, so don't let them convince you that that is the one correct way. Simple things with fewer abstraction layers are easier to oversee and understand (at least in simple examples for beginners, where the reader may not know where to look for what), and that's why they are presented like that.
Regarding the question: I would say none of the above. If I had to decide between them then I would say the service and/or the repository, but that depends on how you define your storage layer, etc.
"None of the above", then what? My preference is to implement an intermediary layer between the service layer and the Web UI layer. The service layer exposes manipulation functionality but for read operations, exposes the whole collection as an IQueryable, and not as an IEnumerable, so that you can utilize LINQ-to-whatever-storage.
Why am I doing this, many may ask. Because almost all the time you will use specialized viewmodels. To display the list of products on an admin page, for example, you would need to display values of columns in the products table, but you are very likely to need to display its category as well. Very rarely is it the case that you need data only from one table and by exposing the items as an IQueryable<T> you get the benefit of being able to do Selects like this:
public IEnumerable<ProductAdminTableViewModel> GetProducts(int page, int pageSize)
{
    // LINQ to Entities requires a stable ordering before Skip/Take.
    return backingQueryable
        .OrderBy(prod => prod.Id)
        .Select(prod => new ProductAdminTableViewModel
        {
            Id = prod.Id,
            Category = prod.Category.Name, // your provider will likely resolve this to a Join
            Name = prod.Name
        })
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToList();
}
As commented, by using the backing store as an IQueryable you will be able to do projections before your query hits the DB and thus you can avoid any nasty Select N+1s.
The reason this sits in an intermediary layer is simply that you do not want to add references to your web project in either your repository or your service layer (project); but because of this, you cannot implement the viewmodel-specific queries in the service layer, simply because the viewmodels cannot be resolved there. This implies that the viewmodels reside in this same project as well, and to that end the MVC project only contains views, controllers and the ASP.NET MVC-related plumbing of your app. I usually call this intermediate layer 'SolutionName.Web.Core'; it references the service layer to be able to access the IQueryable<T>-returning method.

Entity Framework with 3-tier architecture, different entities across domains

I know the title sounds like a duplicate of quite a few existing posts, but I've read quite a few of them and my situation is actually quite different. I would really appreciate it if anyone experienced with Entity Framework could offer some advice on the best architecture for the following scenario.
I have a wpf application with a 3-tier layout, Data Access Layer, Business Logic Layer, and UI Presentation Layer. The UI uses MVVM. DAL uses Entity Framework. UI and Data Access Layer each have their own models, UIModel and DataModel.
The current design uses a global DbContext for Entity Framework across the application. For a simple update operation, an entity is retrieved from the database as a DataModel, converted into a GUIModel, wired to the ViewModel and View for updates, and converted back into a DataModel to update in the database. And here's the problem: when the new DataModel is created from the conversion, it is no longer related to the original entity retrieved, and Entity Framework cannot perform the update because it now has two duplicate models of the same primary key attached to the same DbContext.
I did a little bit of research and found a couple of possible ways to fix this. One is to use a single model entity across all layers instead of separating GUIModel and DataModel, and to break the global DbContext into units of work. This seems to be a very common design, but my concern with this approach is that merging GUIModel and DataModel violates the separation of responsibilities, and using unit of work requires the Business Layer to control the lifetime of the DbContext, which also blurs the boundary between BLL and DAL.
The second alternative would be to use a local DbContext for every database query, within a using block. This seems most memory efficient. But doing it this way makes lazy loading impossible, and eager loading all navigation properties in every query would likely affect performance. Also, the short-lived DbContexts require working completely with a disconnected graph, which becomes quite complicated in terms of change tracking.
A third possibility would be to cache all the original DataModels and apply updates to those entities after the edit.
I am new to Entity Framework and I'm sure there should be other ways to fix this issue too. I'll really appreciate it if anyone could offer some insights on the best way to approach this.
A better approach: when you are going to perform an update in your repository, first get the entity by primary key. Now you are in the DbContext with the entity that needs to be updated, so assign the updated fields and save the context.
Here is code:
public void UpdateEntity(Entity updatedEntity)
{
    using (var db = new DBEntities())
    {
        // Fetch the tracked entity by primary key, then copy the updated
        // fields onto it so EF's change tracking picks up the changes.
        var entity = db.Entities.Find(updatedEntity.Id);
        if (entity != null)
        {
            entity.Name = updatedEntity.Name;
            entity.Description = updatedEntity.Description;
            entity.LastModifiedBy = updatedEntity.LastModifiedBy;
            entity.Value = updatedEntity.Value;
            entity.LastModifiedOn = DateTime.Now;
            db.SaveChanges();
        }
    }
}
I would recommend using separate Business Objects as described in your second alternative. In a multi-tier scenario, you would create reusable objects that support your use case from the UI perspective, modelling the behavior of your business domain (as you call them "GUIModel"). Those models should focus on the behavior of your system and only contain the data needed to support this behavior. This is in direct contrast to entity classes that focus on data.
Example: Northwind Database, Customers Table. The entity would be a class containing all properties of a customer, probably having navigation properties to related things. Would you really want to use this model when you need to display a list of condensed customer information in the dropdown of an auto completion search box? Would you want to use the same model to display customers together with their aggregated invoice data in a grid? You would need to load all customer information together with related invoices to your presentation tier. You probably don't want to do that.
If you had different models for different use cases, things would make more sense from an object oriented point of view:
Class CustomerSearchResult: Id, Name. GetCustomerEdit method.
Class CustomerInvoiceInfo: Id, Name, Aggregated invoice values. GetCustomerEdit method.
Class CustomerEdit: All properties you want to display and edit, timestamp for optimistic concurrency checks. Change tracking logic, validation logic. Methods that model behavior that you need while editing a customer.
Class CustomerEntity: this is your data object that resembles the customers table. You use it as DTO to initialize the other objects from the database or push changes back into the database. You don't send it across the wire.
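To make the shapes concrete, here is a rough skeleton of two of these classes (member names are assumptions based on the descriptions above):

// Condensed read model for search/autocomplete scenarios.
public class CustomerSearchResult
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Edit model: only what the edit screen needs, plus a concurrency token.
public class CustomerEdit
{
    public int Id { get; set; }
    public string Name { get; set; }
    public byte[] RowVersion { get; set; } // timestamp for optimistic concurrency
    // change-tracking and validation logic for the edit use case lives here
}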
This way, when you get to the data access layer, you can put your DbContext into using blocks and respect the unit of work pattern. Of course, you will need to reflect changes made to the CustomerEdit instance by creating a new CustomerEntity from it and reattaching it to the context as modified:
context.Entry(entity).State = EntityState.Modified;
context.SaveChanges();
This seems complex and burdensome at first, but actually, Entity Framework doesn't contain any magic that helps you much in a disconnected (n-tier) scenario. If you try using things like lazy loading or keeping DbContext instances open all the time, things get out of hand pretty fast.
If you're looking for a framework that helps in creating Business Objects and supports multi tier architectures, take a look into CSLA.net. Disclaimer: Many people here don't like it. It will make things worse if used wrong. Still, it helped me in some projects and I'm happy with it.
You can attach an entity to an existing DbContext by using the following code; also, here is a good post about entity states from MSDN.
var existingBlog = new Blog { BlogId = 1, Name = "ADO.NET Blog" };
using (var context = new BloggingContext())
{
    context.Entry(existingBlog).State = EntityState.Modified;
    // Do some more work...
    context.SaveChanges();
}
Regarding 3-tier, I would like to start by giving a small description, with .NET context, of each tier:
Presentation: this is the layer that returns the results to the user; it could be in the form of an ASP.NET website, Windows Forms, Web API, a WCF service or anything else.
Business: this should include the domain model of your business, the business logic, and services that provide business operations across multiple domain entities.
Data access/persistence: this layer should include the logic to persist and retrieve the domain model to and from durable media such as a DB, the file system, ...
Generally the common issue here is which model goes into which layer; for example, should class X go into presentation or business? I recommend an easy way to help you make that decision: introduce a new sibling layer. Ask yourself, if you were to build another presentation layer as a console app instead of Windows, would you copy and paste that logic into the new layer? If yes, then there is a good probability that your classes are not in the right place.
Finally, some concrete recommendations:
Keep each layer having its own models, as each layer has a unique responsibility; there are also good frameworks that can help you map between models, such as AutoMapper (see the sketch after this list).
Don't transfer Entity Framework models across the layers, as this will ruin the separation of concerns; it also causes more and more issues if you have lazy loading enabled.
Try to avoid lazy loading unless you know what you are doing; one of the common pitfalls is Select N+1, and here is a good article describing it.
Also, if you have a complex business, try to separate querying the system from updating it by applying the CQRS pattern; there are some frameworks that can help you, such as Dapper.
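As a small sketch of the AutoMapper recommendation above (classic static API from AutoMapper 4.x and earlier; ProductEntity and ProductModel are illustrative names):

using AutoMapper;

public static class MappingConfig
{
    public static void Configure()
    {
        // Define the entity-to-business-model mapping once at startup.
        Mapper.CreateMap<ProductEntity, ProductModel>();
    }
}

// Usage, e.g. inside a service method:
// ProductModel model = Mapper.Map<ProductModel>(entity);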

ASP.NET SPA with legacy domain objects

Looking at the Single Page Application beta in MVC 4, I don't see how I can use my legacy domain objects as the model. It seems to require that the model use Entity Framework, with DbDataController getting the data, etc.
I do not understand Entity Framework, so I am probably missing something.
How can I use my legacy domain (with its own DAL) in the SPA of MVC 4?
This was answered by somebody else in an ASP.NET forum.
You won't be able to use anything other than EF if you want to use some of these RAD tools. However, SPA builds on top of MVC, so you should be able to build your own version rather easily. The important components would be building a DataController on top of ApiController and a JS consumer for the service provided by your DataController. It's possible that, if you were to format your models in the same format as the EF output (I think it's just OData), you could use upshot.js as well and only have to implement a DataController to format your domain models.
After working with it for a couple of days, I will add that you could, theoretically, use it if the following issues are handled/fixed by you or by future versions of the SPA.
You can create a controller that inherits from System.Web.Http.Data.DataController (and maybe even ApiController). The objects it returns must then just have a property decorated with the System.ComponentModel.DataAnnotations.Key() attribute. I can get the views to work fine, but I am having problems with some of the more advanced features, like grouping.
Read-only properties will not be returned, I guess because of a problem with the current JSON serializer being used. This should be fixed.
Of course, the entire object will be serialized, which can be very problematic if your domain objects are complex with child objects, especially if some of those objects have serialization issues of their own.
Related to the complex serialization: the current JSON serializer cannot handle circular references in the referenced domain objects.
I have also run into problems getting updates/deletes/inserts to post back when using my own controller that inherits from System.Web.Http.Data.DataController (the examples use DbDataController).

Entity Framework (Questions on POCO, Context, and DTO)

I have been reading about Entity Framework over the past couple of days and have managed to get a fair idea of using it, but I still have a couple of questions, some of which might seem a bit too basic. For perspective, I am using Entity Framework 4.0 in an ASP.NET web application. If you can answer any of the questions, please go ahead.
What advantage do I get by using POCO templates? I understand that if I wish to get persistence ignorance and keep my entities clear of any information related to storage, POCO entities are the way to go. Also, would I be able to switch from Entity Framework to, say, NHibernate with relative ease when using POCO entities? Apart from loose coupling, is there any significant reason for me to go towards POCO entities? Also, if I do use POCO, do I end up losing anything? Do I still get change tracking and lazy loading with the help of proxies?
Is it normal practice to use the entities of the EF model as data transfer objects or business objects? For example, I have a separate class library for my entity model. Supposing I am using MVP and I want a list of Employees in a company: the presenter would call my business logic functions, which would query the entity model for the list of Employees and return the list of entities to the presenter. In this case my presenter would need to have a reference to the EF model. Is this the correct way? In the case of my ASP.NET web application it shouldn't be a problem, but if I am using web services, how does this work? Is this the reason to go towards POCO entities?
Supposing the Employee entity has a navigation property to a Company table: if I wrap the data context in a 'using' block and try to access the navigation property in the BL, I am assuming I would get an exception. Would I also get an exception if I turned off lazy loading and used an 'Include' LINQ query to get the entity? On a previous post someone recommended I use a context per request, implying that the context remains active even when I am in the BL. I am assuming I would still need to detach the object and attach it to the context on my next request if I wish to persist any changes I make? Or should I instead just query for the object again with the new context and update it?
This question has more to do with organizing files/best practices and is a follow-up to a question I posted earlier. When I am using separate files based on entities to organize my data access layer, what is the best practice for organizing queries involving joins between multiple tables? I am still a bit hazy on organization. I have tried searching online but haven't had much help.
Terrific question. My first recommendation is to think in patterns. With that said...
You pretty much nailed the advantages of using POCO. There are some distinct advantages to decoupling your business objects (POCO entities) from your data access layer, but the primary reason is, like you said, the ability to change or modify the layers below. However, using POCO you are essentially following the Code First (CF) approach. Personally, I consider it Code In Parallel, depending upon your software development life cycle. You still have all the bells and whistles that the data-first or model-first approaches have, and then some, since you can extend DbContext, which is ObjectContext under the hood. I read an article, which I cannot seem to find, saying that CF is the future of Entity Framework. Lastly, the nice thing with POCO is that you are able to incorporate validation rules there or elsewhere. You can also provide projections. Let's say you have Date of Birth but you want an Age property as well. That now becomes a no-brainer, as the Age property is ignored when mapping to the database.
Personally, I create my own business objects (POCO) for large projects that tend to have a life of their own, where change is a way of life. Other considerations are scalability and maintainability. What if down the road I choose to split functionality between applications where, like you mentioned with web services, functionality is now delivered from two disparate locations? If you have encapsulated your business objects and DAL within the same code block, separation or scalability has now become a bit more complex. However, consider the project: it may be small, with very little future change, so there is no need to throw a grenade to kill a fly. In that case data-first might be the way to go, letting the edmx file represent your objects. So don't marry yourself to one technology or one methodology/pattern. Do what makes sense for your time and business.
Using statements are perfectly fine. In fact, I've recently been turned on to wrapping them within a TransactionScope: if an error occurs, rollbacks are automatic. Next, something to consider is the Unit of Work. The UnitOfWork pattern encapsulates a snapshot of what needs to be performed, where the data context is the boundary within which you work. For each UnitOfWork you have a subject on which work is to be performed, for example an Employee. To keep it simple, if you are to save Employee information, you would make a call to the BL service or repository (whichever). There you pass in the Employee Id and perform some work under that UnitOfWork, where it is either instantiated in the constructor or supplied via Dependency Injection (DI or IoC); an easy starter is StructureMap. The service then makes the necessary calls to your UnitOfWork (DbContext) and returns control back upstream (e.g. to the UI).
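A minimal sketch of combining a using block with TransactionScope as described (DBEntities, Employees and the field assignments are assumptions; TransactionScope needs a reference to System.Transactions):

// requires: using System.Transactions;
using (var scope = new TransactionScope())
using (var db = new DBEntities())
{
    var employee = db.Employees.Find(employeeId);
    employee.LastModifiedOn = DateTime.Now;
    db.SaveChanges();

    // If Complete() is never reached (e.g. an exception is thrown),
    // the transaction rolls back automatically when the scope is disposed.
    scope.Complete();
}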
The best way to learn here is to view others' code. I'd start with some Microsoft examples, such as Nerd Dinner (http://nerddinner.codeplex.com/), then build off that.
Additional Reading:
Use prototype pattern or not
http://weblogs.asp.net/manavi/archive/2011/05/17/associations-in-ef-4-1-code-first-part-6-many-valued-associations.aspx
[EDIT]
NightHawk457, I'm terribly sorry for not responding to your questions. Hopefully you figured it out, but for future readers...
To help everyone visualize, imagine the below Architecture using the Domain Model and Repository as an example. Remember, there are many ways to skin a cat so take this and make it your own and don't forget my Grenade comment above.
Data Layer (Data Access): MyDbContext : DbContext, IUnitOfWork, where IUnitOfWork defines the contract for the CRUD operations (see the sketch after this list).
Data Repository (Data Access / Business Logic): MyDomainObjectRepository : IMyDomainObjectRepository, which receives IUnitOfWork by Factory class or Dependency Injection. Calls MyDomainObject validation on CRUD operations.
Domain Model (Business Logic): MyDomainObject using [Custom] Validation Attributes. Read this for pros/cons.
MVVM / MVC / WCF (Presentation / Service Layers): Whatever additional layers you choose, you now have access to your data, wrapped nicely in smaller modules that each encapsulate their own function. The presentation layer (e.g. ViewModel, Controller, Code-Behind, etc.) can then receive an IMyObjectRepository from a Factory class or by Dependency Injection.
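A rough sketch of how those pieces could line up (member names are assumptions based on the description above):

// IUnitOfWork contracts the CRUD/save boundary.
public interface IUnitOfWork : IDisposable
{
    int SaveChanges();
}

// DbContext already provides SaveChanges() and Dispose(), so it
// satisfies IUnitOfWork as-is.
public class MyDbContext : DbContext, IUnitOfWork
{
    public DbSet<MyDomainObject> DomainObjects { get; set; }
}

public interface IMyDomainObjectRepository
{
    MyDomainObject GetById(int id);
    void Add(MyDomainObject item);
    void Remove(MyDomainObject item);
}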
Tips:
Pass connection string into MyDbContext so you can reuse MyDbContext.
MySql does not play well with System.Transactions.TransactionScope (example). I don't recall exactly, but it was something MySql did not support. This makes testing a bit difficult, since we have created this level of separation.
Create a Test project for each layer and at the minimum test general functionality/rules.
Each Domain Object should extend a base object with an ID field at minimum. Also, do not implement Key attributes here; a Domain Object should not describe architecture, but rather the specific data as an entity. Even with Code First this can be achieved via the Fluent API.
Think generics when creating MyDbContext. ;) Read Diego's post.
In ASP.NET, the repositories are nice to use with ObjectDataSources.
As you can see, there is a clear separation of roles, where IUnitOfWork and IMyDomainObjectRepository are the interfaces which expose the layers' functionality. As an example, IUnitOfWork could be NHibernate, Entity Framework, LinqToSql or ADO.NET, where a change to the factory class or dependency injection registration is all that has to change. FYI, I've heard the Repository called the Service Layer as well; personally I prefer the first name, so it isn't confused with Web Services.

The next big takeaway from this structure is realizing the scope of your database context (IUnitOfWork). A simple example would be an ASP.NET page, where for each page there is one and only one IUnitOfWork, either for each repository or for that scope of work. The same holds true for ViewModels, Controllers, etc. So let's say you need to utilize two repositories, EmployeeRepository and HRRepository. You could then share the IUnitOfWork between both, or not. To cross page, ViewModel or Controller boundaries, we use the ID for entities, which are then pulled from the DB and worked on. You could alternatively pass a DTO across boundaries and attach it to the context, but then you begin losing separation of layers.
To continue, POCO classes do not have to be auto-generated. In fact, you can create your entity classes from scratch and perform the mapping in your extended DbContext class inside the OnModelCreating(DbModelBuilder mb) method. Start here, then here, and note the Additional Resources; google Fluent API and read this post by Diego.
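For example, a hand-written POCO mapped in OnModelCreating might look roughly like this (the Customer type, table name and column rules are illustrative assumptions):

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the POCO to an existing table and describe its columns,
        // keeping the entity class itself free of mapping attributes.
        var customer = modelBuilder.Entity<Customer>();
        customer.ToTable("Customers");
        customer.HasKey(c => c.Id);
        customer.Property(c => c.Name).IsRequired().HasMaxLength(50);
    }
}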
As for validation, this is an interesting point, because it would be GREAT if all business rules could be validated in one location. Well, as we all know, that doesn't work very well. So here is my recommendation: keep all data-level validation (i.e. required, range, format, etc.) as data annotations, as much as possible, in the domain object, and leave process validation in the Repository, with clear roles for each Repository (i.e. if (isEmployee) do this, else that). By clear, I mean you do not want to add an Employee in two different repositories, where the validation would have to be duplicated. To invoke the validation, start here. Capture the ValidationResults and send them upstream with a MyRepositoryValidationException, which contains a collection of validation errors (e.g. "Employee is required") that can be presented to the presentation layer. With all that said, don't forget to perform validation at the presentation layer too; you don't want to need a postback just to find out an Employee has an invalid Email, for example.
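Invoking the data annotations manually could look roughly like this (Validator, ValidationContext and ValidationResult are real types in System.ComponentModel.DataAnnotations; the Employee instance and MyRepositoryValidationException are assumptions):

// requires: using System.Collections.Generic;
//           using System.ComponentModel.DataAnnotations;
var employee = new Employee { Email = "not-an-email" };
var context = new ValidationContext(employee, null, null);
var results = new List<ValidationResult>();

// Passing true validates property-level attributes such as
// [Required] and [Range], not just type-level validators.
if (!Validator.TryValidateObject(employee, context, results, true))
{
    // e.g. wrap the results in a MyRepositoryValidationException
    // and send them upstream to the presentation layer.
    throw new MyRepositoryValidationException(results);
}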
Just remember to balance time and effort with complexity. For something simple, use Database First or Model First with your EDMX file, then lay a repository on top of that which also contains all the validation rules.

How to simply update entity in entity framework?

I'm writing a custom .NET MembershipProvider (not the built-in one) and trying to update using Entity Framework. But of course I have no access to (Try)UpdateModel. How can I update it? Thanks in advance.
You can't do this kind of thing with the ASP.NET Membership Provider, that is, write custom updates to the tables.
If it were that easy, less people would have issues/problems with it. =)
Don't even bother adding the ASP.NET Membership SQL Tables onto your EDMX - you won't know the relationships or how the tables really work together. Forget about trying to represent it as a "Model".
My advice is don't try to bind to the MembershipProvider as a Model (i.e. don't create a strongly typed view); just call the Membership methods directly from your controller.
This is where we start to miss the 'drag and drop' of Web Forms, can't drop on a ChangePassword control. =)
Your best bet would be to create a regular view (not strongly typed), then have regular buttons that post to your controller methods.
Don't try to pass the object through as a model; get the fields from the Request.Form collection.
[HttpPost]
public ActionResult ChangePassword()
{
    // Pull the raw values straight from the form collection.
    string userName = Request.Form["userName"];
    string oldPassword = Request.Form["oldPassword"];
    string newPassword = Request.Form["newPassword"];
    // ChangePassword requires the old password as well as the new one.
    Membership.Provider.ChangePassword(userName, oldPassword, newPassword);
    return View("ChangePasswordSuccess");
}
The above code would be (roughly) the equivalent of passing through a strongly typed User object, changing the password and calling UpdateModel.
Of course, you could implement your own membership provider, but I don't believe implementing a custom provider just to make your code "easier" should be the driver. Unless it is coded properly (which is not easy to do), you compromise a lot of the built-in security features and the wealth of account management options of the ASP.NET Membership provider that we take for granted.
Doing this with the default provider is a little complicated; however, what would be much easier would be to create your own CustomMembershipProvider, as outlined here:
Implementing A Membership Provider
Since you do this against your OWN account model, you can code the repository/DAL however you choose and use standard EF practices and conventions, allowing you to perform simple and strongly mapped operations such as UpdateModel.
A similar question was asked here.
Here is a CodeProject sample app that could get you started that uses EF and Microsoft's MembershipProvider. There is a class they built that inherits from MembershipProvider.
http://www.codeproject.com/KB/web-security/EFMembershipProvider.aspx
