I have a web solution (in VS2010) with two sub-projects:
Domain which holds the Model classes (mapped to database tables via Entity Framework) and Services which (besides other stuff) are responsible for CRUD operations
WebUI which references the Domain project
For the first pages I've created I have used the Model classes from the Domain project directly as Model in my strongly typed Views because the classes were small and I wanted to display and modify all properties.
Now I have a page which should only work with a small part of all properties of the corresponding Domain Model. I retrieve those properties by using a projection of the query result in my Service class. But I need to project into a type - and here come my questions about the solutions I can think of:
Solution 1: I introduce ViewModels which live in the WebUI project and expose IQueryables and the EF data context from the Service classes to the WebUI project. Then I could project directly into those ViewModels.
Solution 2: If I don't want to expose IQueryables and the EF data context, I put the ViewModel classes in the Domain project. Then I can return the ViewModels directly as the result of the queries and projections from the Service classes.
Solution 3: In addition to the ViewModels in the WebUI project, I introduce data transfer objects (DTOs) which move the data from the queries in the Service classes to the ViewModels.
Solutions 1 and 2 look like the same amount of work, and I am inclined to prefer solution 2 to keep all the database concerns in a separate project. But somehow it sounds wrong to have ViewModels in the Domain project.
Solution 3 sounds like a lot more work, since I have more classes to create and have to take care of the Model-DTO-ViewModel mapping. I also don't understand what the difference between the DTOs and the ViewModels would be. Aren't the ViewModels exactly the collection of the selected properties of my Model class that I want to display? Wouldn't they contain the same members as the DTOs? Why would I want to differentiate between ViewModels and DTOs?
Which of these three solutions is preferable and what are the benefits and downsides? Are there other options?
Thanks in advance for your feedback!
Edit (because my wall of text was perhaps too long and I have been asked for code):
Example: I have a Customer Entity ...
public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
    // ... and many more properties
}
... and want to create a View which only shows (and perhaps allows editing of) the Name of customers in a list. In a Service class I extract the data I need for the View via a projection:
public class CustomerService
{
    public List<SomeClass1> GetCustomerNameList()
    {
        using (var dbContext = new MyDbContext())
        {
            return dbContext.Customers
                .Select(c => new SomeClass1
                {
                    ID = c.ID,
                    Name = c.Name
                })
                .ToList();
        }
    }
}
Then there is a CustomerController with an action method. What should it look like?
Either this way (a) ...
public ActionResult Index()
{
    List<SomeClass1> list = _service.GetCustomerNameList();
    return View(list);
}
... or better this way (b):
public ActionResult Index()
{
    List<SomeClass1> list = _service.GetCustomerNameList();
    List<SomeClass2> newList = CreateNewList(list);
    return View(newList);
}
With respect to option 3 above I'd say: SomeClass1 (lives in Domain project) is a DTO and SomeClass2 (lives in WebUI project) is a ViewModel.
I am wondering if it ever makes sense to distinguish the two classes. Why wouldn't I always choose option (a) for the controller action (because it's easier)? Are there reasons to introduce the ViewModel (SomeClass2) in addition to the DTO (SomeClass1)?
I would solve your problem by using an auto-mapping tool (like AutoMapper) to do the mapping for you. In cases where the mapping is easy (for example, when all properties from one class should be mapped to properties with the same name on another class), AutoMapper can do all the hook-up work for you, and you only have to write a couple of lines of code to declare that a map between the two types should exist at all.
That way, you can have your entities in Domain and a couple of view model classes in your WebUI, and somewhere (preferably in WebUI or a sub-namespace of it) define maps between them. Your view models will in effect be DTOs, but you won't have to worry much about the conversion process between the domain and your DTO classes.
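For the Customer example above, a minimal sketch might look like this (assuming AutoMapper's classic static Mapper API; CustomerNameViewModel and GetCustomers are illustrative names, not part of the original code):
using System.Collections.Generic;
using System.Web.Mvc;
using AutoMapper;

// View model living in the WebUI project (illustrative name).
public class CustomerNameViewModel
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public static class MappingConfig
{
    // Call once at startup, e.g. from Application_Start in Global.asax.
    public static void Configure()
    {
        // Properties with matching names (ID, Name) are hooked up automatically.
        Mapper.CreateMap<Customer, CustomerNameViewModel>();
    }
}

public class CustomerController : Controller
{
    private readonly CustomerService _service = new CustomerService();

    public ActionResult Index()
    {
        List<Customer> customers = _service.GetCustomers(); // hypothetical service method returning entities
        var viewModels = Mapper.Map<List<Customer>, List<CustomerNameViewModel>>(customers);
        return View(viewModels);
    }
}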
Note: I would strongly recommend against giving your Domain entities straight to the views of your MVC web UI. You don't want EF to "stick around" all the way to the front-end layer, in case you later want to use something other than EF.
"I introduce ViewModels which live in the WebUI project and expose IQueryables and the EF data context from the service to the WebUI project. Then I could directly project into those ViewModels."
The trouble with this is that you soon run into problems when using EF to 'flatten' models. I encountered something similar when I had a CommentViewModel class that looked like this:
public class CommentViewModel
{
    public string Content { get; set; }
    public string DateCreated { get; set; }
}
The following EF4 query projection to the CommentViewModel wouldn't work, as EF couldn't translate the ToShortTimeString() method into SQL:
var comments = from c in DbSet
               where c.PostId == postId
               select new CommentViewModel()
               {
                   Content = c.Content,
                   DateCreated = c.DateCreated.ToShortTimeString()
               };
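One common workaround (a sketch, not part of the original answer) is to project only raw, translatable values on the database side and do the string formatting in memory after switching to LINQ to Objects:
var comments = DbSet
    .Where(c => c.PostId == postId)
    .Select(c => new { c.Content, c.DateCreated }) // translated to SQL
    .AsEnumerable()                                // from here on: LINQ to Objects, in memory
    .Select(x => new CommentViewModel
    {
        Content = x.Content,
        DateCreated = x.DateCreated.ToShortTimeString()
    })
    .ToList();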
Using something like AutoMapper is a good choice, especially if you have a lot of conversions to make. However, you can also create your own converters that basically convert your domain model to your view model. In my case I created my own extension methods to convert my Comment domain model to my CommentViewModel like this:
public static class ViewModelConverters
{
    public static CommentViewModel ToCommentViewModel(this Comment comment)
    {
        return new CommentViewModel()
        {
            Content = comment.Content,
            DateCreated = comment.DateCreated.ToShortDateString()
        };
    }

    public static IEnumerable<CommentViewModel> ToCommentViewModelList(this IEnumerable<Comment> comments)
    {
        List<CommentViewModel> commentModels = new List<CommentViewModel>(comments.Count());

        foreach (var c in comments)
        {
            commentModels.Add(c.ToCommentViewModel());
        }

        return commentModels;
    }
}
Basically what I do is perform a standard EF query to bring back a domain model and then use the extension methods to convert the results to a view model. For example, the following methods illustrate the usage:
public Comment GetComment(int commentId)
{
    return CommentRepository.GetById(commentId);
}

public CommentViewModel GetCommentViewModel(int commentId)
{
    return CommentRepository.GetById(commentId).ToCommentViewModel();
}

public IEnumerable<Comment> GetCommentsForPost(int postId)
{
    return CommentRepository.GetCommentsForPost(postId);
}

public IEnumerable<CommentViewModel> GetCommentViewModelsForPost(int postId)
{
    return CommentRepository.GetCommentsForPost(postId).ToCommentViewModelList();
}
Talking about Models, ViewModels and DTOs is confusing; personally I don't like to use these terms. I prefer to talk about Domain Entities, Domain Services and Operation Input/Result (aka DTOs). All of these types live in the Domain layer. Operations are the behavior of Entities and Services. Unless you are building a pure CRUD application, the presentation layer only deals with Input/Result types, not Entities. You don't need additional ViewModel types; these are the ViewModels (in other words, the Model of the View). The View is there to translate the Operation Results to HTML, but the same Result could just as well be serialized as XML or JSON. What you use as the ViewModel is part of the domain, not the presentation layer.
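A minimal sketch of that idea, applied to the customer-name example from the question (type and member names are illustrative, not from the original post):
using System.Collections.Generic;

// Domain layer: the Result type of a "list customer names" operation.
// The view binds to this directly; no separate ViewModel class is added.
public class CustomerNameResult
{
    public int ID { get; set; }
    public string Name { get; set; }
}

// Domain service exposing the operation. The presentation layer only ever
// sees Input/Result types like this one, never the Customer entity itself.
public interface ICustomerOperations
{
    IList<CustomerNameResult> GetCustomerNames();
}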
Related
Is there a best practice for creating a REST API with ASP.NET MVC 3? At the moment I am thinking of creating a controller for each version of the REST API. For example, so far I have:
public class V1Controller : Controller
{
    public V1Controller()
    {
    }

    public ActionResult GetUser(string userId, IUserRepository userRepository)
    {
        //code to pull data and convert to JSON string
        return View("Results");
    }

    public ActionResult GetUsersByGroup(string groupId, IUserRepository userRepository)
    {
        //code to pull data and convert to JSON string
        return View("Results");
    }
}
For the views I override _ViewStart.cshtml to remove the layout, and I have a Results.cshtml that just outputs the data formatted in the controller action, currently as JSON. Having every single REST call in one controller seems like a bit too much, but it is the best way I can think of to keep clean, separate versions of the API: when it comes to creating version 2, I can create a V2Controller without breaking the existing API, giving people time to switch over to the new one.
Is there a better way to create a REST API with ASP.NET MVC 3?
I was able to find a decent solution using MVC Areas.
First, I wanted to have my API follow this URL Definition:
http://[website]/[major_version]_[minor_version]/{controller}/{action}/...
I also wanted to break the different versions up into separate project files and use the same controller names in each version:
"../v1_0/Orders/ViewOrders/.." => "../v2_3/Orders/ViewOrders/.."
I searched around and found a workable solution with the use of MVC Areas.
I created a new project in my solution called "Api.Controllers.v1_0" and, as a test, put a SystemController.cs file in there:
using System.Web.Mvc;

namespace Api.Controllers.v1_0
{
    public class SystemController : Controller
    {
        public ActionResult Index()
        {
            return new ContentResult() { Content = "VERSION 1.0" };
        }
    }
}
I then added a v1_0AreaRegistration.cs file:
using System.Web.Mvc;

namespace Api.Controllers.v1_0
{
    public class v1_0AreaRegistration : AreaRegistration
    {
        public override string AreaName
        {
            get { return "v1_0"; }
        }

        public override void RegisterArea(AreaRegistrationContext context)
        {
            context.MapRoute(
                "v1_0",
                "v1_0/{controller}/{action}/{id}",
                new { controller = "System", action = "Index", id = UrlParameter.Optional }
            );
        }
    }
}
I walked through the same steps above for a "..v1_1" project with the corresponding files, added both projects as references to my "Api.Web" MVC project, and was off and running.
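One detail worth noting (a sketch, assuming the standard MVC 3 Global.asax template): the area registrations in the referenced projects are only discovered if RegisterAllAreas() is called at application startup.
using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Scans all referenced assemblies for AreaRegistration subclasses, so the
        // v1_0 and v1_1 registrations from the Api.Controllers.* projects are picked up.
        AreaRegistration.RegisterAllAreas();

        // ... the template's usual filter and route registration follows here ...
    }
}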
If all you are returning is JSON, you do not need a view. Just return:
new JsonResult(){Data = Data};
Look in here.
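For example, a complete GET action could return the JSON directly (a sketch based on the question's code; GetById is a hypothetical repository method, and JsonRequestBehavior.AllowGet is required for GET requests in MVC 3):
public class V1Controller : Controller
{
    public ActionResult GetUser(string userId, IUserRepository userRepository)
    {
        var user = userRepository.GetById(userId); // hypothetical repository call
        return Json(user, JsonRequestBehavior.AllowGet);
    }
}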
Also, in terms of versioning, versions can be implemented as different controllers or as extra methods in the same controller. But it is not clear from your question why you would need versions, or why your clients (which I assume are browsers) would need to know about versioning at all.
A controller such as the one in your example code should always keep the methods you have now, for instance GetUsersByGroup(), with the same signatures. I don't see how there could be a different version of that method.
The inputs are group and repository (which I believe comes from DI). The output is a list of users in JSON format. That's all that matters to the users of the API. What you do inside this method is no one's business.
You should think more in terms of inputs and outputs. You shouldn't be changing the signatures of existing actions unless it is really necessary to do so.
Think of the controller class as implementing an interface. You have an interface, and the controller class is its implementation (you don't actually need the interface; just think of it that way). You will rarely change an interface once one or several classes implement it, but you might add methods to it. That requires changes only in the implementing classes; it does not break the functionality of the API, and everyone who is using it can keep using it.
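To make the analogy concrete, here is an illustrative (purely hypothetical) contract for the controller in the question:
using System.Web.Mvc;

// The published "contract" of the V1 API, written out as an interface.
public interface IUserApiV1
{
    ActionResult GetUser(string userId);
    ActionResult GetUsersByGroup(string groupId);
}

// A later version can add members (e.g. GetUsersByRole) without touching the
// existing signatures, so existing clients of the API keep working unchanged.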
I'm experimenting with ASP.NET MVC2, in particular viewmodels and partials. My question is: Is it 'valid' or 'right' to have your partials strongly typed against an interface and then have your viewmodels implement that interface if the view uses the partial?
To illustrate, say I have a Product form partial (strongly typed against IProductFormViewModel) which is used in both the Edit and Create views. These views are strongly typed against ProductEditViewModel and ProductCreateViewModel which implement the IProductFormViewModel.
The benefit being that the corresponding POST actions for Create and Edit take ProductCreateViewModel & ProductEditViewModel objects respectively.
Edit:
If the partial has its own dedicated viewmodel (ProductFormViewModel), and ProductEditViewModel & ProductCreateViewModel each expose a property of type ProductFormViewModel which is passed to the partial, then model binding for ProductEditViewModel & ProductCreateViewModel doesn't work when the form is submitted, because the Edit and Create actions expect their respective object types... that's the reason for the approach.
You may have problems when the interfaces for your different partials expose the same property name, e.g. Name. You would then have to implement the interface explicitly, which would in turn cause problems with your model binding.
Otherwise it should work.
Yes, this seems a valid approach.
Interfaces are essentially contracts that need to be fulfilled by implementing classes. But in the case of view models I don't see any specific benefit in having your viewmodels implement an interface, because in the end you still have to instantiate the viewmodel in the controller and pass it to the view. Suppose you change the implementation of ProductFormViewModel or ProductEditViewModel: you still have to instantiate (populate) the object in the controller and pass it to the view, so it does not achieve the same purpose as it does in the repository pattern in conjunction with dependency injection. If you can tell us what exactly you are trying to achieve with this approach, we might be able to help.
Your approach is fine.
Or you could have a model specific to your partial and use composition instead, e.g.:
public class AddressModel
{
    public string Address { get; set; }
    public string Code { get; set; }
}

public class PersonModel
{
    public string Name { get; set; }
    public AddressModel Address { get; set; }
}
Then, when rendering your partial, you pass it the correct model.
HTH
I've been playing around with ASP.NET MVC for the past few weeks. I've got a simple web application with a form which contains a number of drop down lists.
The items in the drop down lists are stored in a database, and I'm using LINQ to SQL to retrieve them.
My question is - where's the appropriate place to put this code? From what I've read so far, it seems advisable to keep the Controller 'thin', but that's where I currently have this code, as it needs to be executed when the page loads.
Where should I be putting DB access code etc.? I've included an excerpt from my controller below.
Thanks.
public ActionResult Index()
{
    TranslationRequestModel trm = new TranslationRequestModel();

    // Get the list of supported languages from the DB
    var db = new TransDBDataContext();
    IEnumerable<SelectListItem> languages = db.trans_SupportedLanguages
        .Select(c => new SelectListItem
        {
            Value = Convert.ToString(c.ID),
            Text = c.Name.ToString()
        });

    ViewData["SourceLanguages"] = languages;
    ViewData["TargetLanguages"] = languages;

    return View();
}
Your database access code should be in a repository. Example:
public interface ITranslationRepository
{
    Translation GetTranslation();
}
and the controller would use this repository:
public class TranslationController : Controller
{
    private readonly ITranslationRepository _repository;

    public TranslationController(ITranslationRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        // query the repository to fetch a model
        Translation translation = _repository.GetTranslation();

        // use AutoMapper to map between the model and the view model
        TranslationViewModel viewModel = Mapper.Map<Translation, TranslationViewModel>(translation);

        // pass the view model to the view
        return View(viewModel);
    }
}
So the basic idea is the following:
The controller queries a repository to fetch a model
The controller maps this model to a view model (AutoMapper is great for this job)
The controller passes the view model to the view
The view is strongly typed to the view model and uses it to edit/display
As far as the implementation of this repository is concerned feel free to use any data access technology you like (EF, NHibernate, Linq to XML, WCF calls to remote resources over the internet, ...)
There are the following advantages:
The controller logic is completely decoupled from the data access logic
Your controllers can be unit tested in isolation
Your models are not littered with properties that should belong to the UI layer (such as SelectListItem) and thus are reusable across other types of application than ASP.NET MVC.
The view model is a class which is specifically tailored to the needs of the view, meaning that it will contain specifically formatted properties and the view code will be extremely readable (a sketch follows this list).
Your views are strongly typed => no more ViewData and ugly magic strings
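Here is the sketch referred to above: a view model tailored to the dropdown view from the question (property names are illustrative, not from the original answer):
using System.Collections.Generic;
using System.Web.Mvc;

public class TranslationRequestViewModel
{
    // Values posted back by the form.
    public int SelectedSourceLanguageId { get; set; }
    public int SelectedTargetLanguageId { get; set; }

    // UI-only concerns such as SelectListItem live here, not on the domain model.
    public IEnumerable<SelectListItem> SourceLanguages { get; set; }
    public IEnumerable<SelectListItem> TargetLanguages { get; set; }
}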
I suggest that your data-access code be contained in its own project/assembly, which is referenced by the UI tier (the ASP.NET MVC application). That will help achieve the goal of keeping your controllers thin and keep all the data access code out of your MVC UI project.
That typically leads to another question/discussion about where the domain entities live when mapping to the data store. Some architects like to have the entities in their own separate assembly, which encourages reuse in other applications. Others like to keep the entity model and the data access code in the same project/assembly. This is totally up to you and your environment.
For an example, let's say it's a billing application; holding customers, invoices, etc.
Your implementation will be different, depending on your data access strategy (an ORM like LINQ to SQL, EF, NHibernate or SubSonic, plain old ADO.NET, or reading from a flat file).
// Assembly: InvoicingDL
public class CustomerRepo
{
    public IQueryable<Customer> ListCustomers()
    {
        return MyDatabase.Customers(); // however you'd get all your customers
    }

    //etc
}
// Assembly: InvoicingDL
public class InvoicingRepo
{
    public IQueryable<Invoice> GetCustomerInvoices(int custID)
    {
        return MyDatabase.Invoices.Where(i => i.CustomerID == custID);
    }

    //etc
}
Check out the Repository pattern
https://web.archive.org/web/20110503184234/http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/10/08/the-repository-pattern.aspx
http://www.mindscapehq.com/blog/index.php/2008/05/12/using-the-unit-of-work-per-request-pattern-in-aspnet-mvc/
The idea is that you abstract your data access behind something called a repository that returns domain objects. Your controller can then use this repository to get the appropriate objects from the database and assign them to the model.
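For instance, a controller consuming the repositories above might look like this (a sketch; the mapping to a view model is left out):
using System.Linq;
using System.Web.Mvc;

public class InvoiceController : Controller
{
    private readonly InvoicingRepo _invoices = new InvoicingRepo();

    public ActionResult Index(int custID)
    {
        // The controller stays thin: ask the repository for the data
        // and hand the result to the view.
        var invoices = _invoices.GetCustomerInvoices(custID).ToList();
        return View(invoices);
    }
}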
I've been getting several errors:
cannot add an entity with a key that is already in use
An attempt has been made to attach or add an entity that is not new, perhaps having been loaded from another datacontext
In case 1, this stems from trying to set the key for an entity instead of setting the entity itself. In case 2, I'm not attaching an entity, but I am doing this:
MyParent.Child = EntityFromOtherDataContext;
I've been using the pattern of wrapping everything in a using block with the data context. In my case, I am doing this in a web forms scenario, and obviously moving the data context to a class-wide member variable solves this.
My questions are thus two-fold:
How can I get rid of these errors without having to structure my program in an odd way or pass the data context around, while keeping the local-wrap pattern? I assume I could make another hit to the database, but that seems very inefficient.
Would most people recommend moving the data context to class-wide scope for web pages?
LINQ to SQL is not well adapted to disconnected scenarios. You can copy your entity to a DTO with a structure similar to the entity and pass that around, then copy the properties back to an entity when it's time to attach it to a new data context. You can also deserialize/reserialize the entity before attaching it to a new data context to get a clean state. The first workaround clearly violates the DRY principle, whereas the second is just ugly. If you don't want to use either of these solutions, the only option left is to retrieve the entity you're about to modify by its PK by hitting the DB. That means an extra query before every update. Or use another ORM, if that's an option for you. Entity Framework 4 (included with .NET 4) with self-tracking entities is what I'm currently using on a web forms project, and everything is great so far.
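A sketch of that "extra query" workaround (the context, table and method names here are assumptions, not taken from the question):
void AttachChildToParent(int childId, int parentId)
{
    using (var dc = new MyDataContext())
    {
        // One extra round-trip, but both entities now belong to the same
        // DataContext, so no "not new" / "key already in use" error occurs.
        var parent = dc.Parents.Single(p => p.ID == parentId);
        var child = dc.Children.Single(c => c.ID == childId);

        child.Parent = parent;
        dc.SubmitChanges();
    }
}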
DataContext is not thread-safe and should only be used with using at the method level, as you already do. You could consider adding a lock to a static data context, but that means no concurrent access to the database. Plus, entities will accumulate in memory inside the context and turn into potential problems.
For those that came after me, I'll provide my own take:
The error "an attempt has been made to add or attach an entity that is not new" stems from this operation:
Child.Parent = ParentEntityFromOtherDataContext
We can reload the object using the current datacontext to avoid the problem in this way:
Child.Parent = dc.Entries.Select(t => t).Where(t => t.ID == parentEntry.ID).SingleOrDefault();
Or one could do this
void MySubroutine(DataContext previousDataContext)
{
    // ... work with the passed-in DataContext ...
}
Or in a web forms scenario, I am leaning to making the DataContext a class member such as this:
DataContext _dc = new DataContext();
Yes, the data context is supposed to represent a unit of work. But it is a lightweight object, and in a web forms scenario where a page is fairly transient, the pattern can be changed from using (var dc = new DataContext()) to simply using the member variable _dc. I am leaning towards this last solution because it will hit the database less and require less code.
But, are there gotchas to even this solution? I'm thinking along the lines of some stale data being cached.
What I usually do is this
public abstract class BaseRepository : IDisposable
{
    public BaseRepository()
        : this(new MyDataContext(ConfigurationManager.ConnectionStrings["myConnection"].ConnectionString))
    {
    }

    public BaseRepository(MyDataContext dataContext)
    {
        this.DataContext = dataContext;
    }

    public MyDataContext DataContext { get; set; }

    public void Dispose()
    {
        this.DataContext.Dispose();
    }
}
Then imagine I have the following repository
public class EmployeeRepository : BaseRepository
{
    public EmployeeRepository() : base()
    {
    }

    public EmployeeRepository(MyDataContext dataContext) : base(dataContext)
    {
    }

    public Employee SelectById(Guid id)
    {
        return this.DataContext.Employees.FirstOrDefault(e => e.Id == id);
    }

    public void Update(Employee employee)
    {
        Employee original = this.SelectById(employee.Id);
        if (original != null)
        {
            original.Name = employee.Name;
            //others
            this.DataContext.SubmitChanges();
        }
    }
}
And in my controllers (I am using ASP.NET MVC):
public ActionResult Update(Employee employee)
{
    using (EmployeeRepository employeeRepository = new EmployeeRepository())
    {
        if (ModelState.IsValid)
        {
            employeeRepository.Update(employee);
        }
    }

    //other treatment
    return RedirectToAction("Index"); // for example
}
So the data context is properly disposed of, and I can use it across the same instance of my employee repository.
Now imagine that for a specific action I want the employee's company to be loaded (in order to be displayed in my view later); I can do this:
public ActionResult Select(Guid id)
{
    using (EmployeeRepository employeeRepository = new EmployeeRepository())
    {
        // Specifying special load options for this specific action:
        DataLoadOptions options = new DataLoadOptions();
        options.LoadWith<Employee>(e => e.Company);
        employeeRepository.DataContext.LoadOptions = options;

        return View(employeeRepository.SelectById(id));
    }
}
I have a class called 'Article' in a project called 'MyProject.Data', which acts as the data layer for my web application.
I have a separate project called 'MyProject.Admin', which is a web-based admin system for viewing/editing the data and was built using ASP.NET Dynamic Data.
Basically I want to extend the Article class, using a partial class, so that I can augment one of its properties with a "UIHint" extender, which will allow me to replace the normal multi-line textbox with an FCKEdit control.
My partial class and extender would look like this:
[MetadataType(typeof(ProjectMetaData))]
public partial class Project
{
}

public class ProjectMetaData
{
    [UIHint("FCKeditor")]
    public object ItemDetails { get; set; }
}
Now this all works fine if the partial class is in the same project as the original partial class - i.e. the MyProject.Data project.
But UI behavior shouldn't sit in the Data layer, but rather, in the Admin layer. So I want to move this class to MyProject.Admin.
However, if I do that, the functionality is lost.
My fundamental question is: can I have 2 partial classes in separate projects, but both referring to the same "class"?
If not, is there a way to accomplish what I'm trying to do, without mixing data-layer logic with UI logic?
No, you cannot have two partial classes referring to the same class in two different assemblies (projects). Once the assembly is compiled, the metadata is baked in and the class is no longer partial. Partial classes allow you to split the definition of the same class into two files.
As noted, partial classes are a compile-time phenomenon, not runtime. Classes in assemblies are by definition complete.
In MVC terms, you want to keep view code separate from model code, yet enable certain kinds of UI based on model properties. Check out Martin Fowler's excellent overview of the different flavours of MVC, MVP and whatnot: you'll find design ideas aplenty. I suppose you could also use Dependency Injection to tell the UI what kind of controls are viable for individual entities and attributes.
Your aim of separating concerns is great; but partial classes were intended to address entirely different issues (primarily with code generation and design-time modelling languages).
Extension methods and ViewModels are the standard way to extend data-layer objects in the frontend like this:
Data Layer (class library, Person.cs):
namespace MyProject.Data.BusinessObjects
{
    public class Person
    {
        public string Name { get; set; }
        public string Surname { get; set; }
        public string Details { get; set; }
    }
}
Display Layer (web application) PersonExtensions.cs:
using System.Web;
using MyProject.Data.BusinessObjects;

namespace MyProject.Admin.Extensions
{
    public static class PersonExtensions
    {
        public static HtmlString GetFormattedName(this Person person)
        {
            return new HtmlString(person.Name + " <b>" + person.Surname + "</b>");
        }
    }
}
ViewModel (for extended view-specific data):
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using MyProject.Data.BusinessObjects;

namespace MyProject.Admin.ViewModels
{
    public class PersonViewModel
    {
        public Person Data { get; set; }
        public Dictionary<string, string> MetaData { get; set; }

        [UIHint("FCKeditor")]
        public object PersonDetails
        {
            get { return Data.Details; }
            set { Data.Details = (string)value; }
        }
    }
}
Controller PersonController.cs:
public ActionResult Person(int id)
{
    var model = new PersonViewModel();
    model.Data = MyDataProvider.GetPersonById(id);
    model.MetaData = MyDataProvider.GetPersonMetaData(id);
    return View(model);
}
View, Person.cshtml:
@model MyProject.Admin.ViewModels.PersonViewModel
@using MyProject.Admin.Extensions

<h1>@Model.Data.GetFormattedName()</h1>
<img src="~/Images/People/image_@(Model.MetaData["image"]).png" >
<ul>
    <li>@Model.MetaData["comments"]</li>
    <li>@Model.MetaData["employer_comments"]</li>
</ul>

@Html.EditorFor(m => m.PersonDetails)
Add the base file as a linked file into your projects. It's still partial but it allows you to share it between both projects, keep them synchronized and at the same time have version/framework specific code in the partial classes.
I've had similar issues with this. I kept my partial classes in my Data project, so in your case 'MyProject.Data'. The metadata classes shouldn't go in your Admin project, as you would otherwise create a circular reference.
I added a new class library project for my metadata classes, e.g. 'MyProject.MetaData', and then referenced this from my Data project.
Perhaps use a static extension class.
Just add the class file as a link in your new project and keep the same namespace in your partial class.
Since 2019 you can have 2 parts of a partial class in different assemblies using a trick. This trick is explained and demonstrated in this article:
https://www.notion.so/vapolia/Secret-feature-Xamarin-Forms-control-s-auto-registration-1fd6f1b0d98d4aabb2defa0eb14961fa
At its core it uses the MSBuild.Sdk.Extras extension to SDK-style projects, which works around the limitation that all parts of a partial class must live in the same assembly by using one project with multiple simultaneous targets, effectively creating multiple assemblies in one compilation of the same project.
I may be mistaken here, but could you not simply define the ProjectMetaData class in your MyProject.Admin project?