Right place to initialize an object in ASP.NET MVC

I am new to the MVC way of programming, so please bear with my basic question!
I have a Status class with a default constructor (in an ASP.NET MVC application).
public Status()
{
    this.DatePosted = DateTime.Now;
}
I noticed Fluent NHibernate calls this constructor each time it fetches a list of existing Status objects from the database. Hence, the constructor does not seem like the right place to initialize the date.
Where should I move this initialization? Moving it to the controller (the Add action of the Status controller) also seems to violate the principle that the controller should not make any business decisions. Should I move it to the Status DAO then? (In the traditional ASP.NET Web Forms applications I have worked with, a DAO simply accepted a business object and saved it to the database; it did not contain any logic.)
I would like to know the right way to accomplish this. Is there another layer I am missing here where this initialization should take place?

I noticed Fluent NHibernate calls this constructor each time it fetches a list of existing Status objects from the database. This does not seem right
This is exactly what is supposed to be happening. Why wouldn't an ORM call the default constructor for an object? Every hand-rolled DAL and ORM in the world would cause DatePosted to be reset, because that's just how constructors work.
Your DatePosted property should probably be set via model binding or manually in the controller, and not be part of a constructor.
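For illustration, a minimal sketch of that approach, assuming a typical StatusController with an injected repository (the IStatusRepository abstraction and the action names are illustrative, not from the question):
using System;
using System.Web.Mvc;

public class StatusController : Controller
{
    // Assumed abstraction over the DAO mentioned in the question
    private readonly IStatusRepository _statusRepository;

    public StatusController(IStatusRepository statusRepository)
    {
        _statusRepository = statusRepository;
    }

    [HttpPost]
    public ActionResult Add(Status status)
    {
        // Stamp the posted date when the new Status is created,
        // rather than in the entity's constructor
        status.DatePosted = DateTime.Now;

        _statusRepository.Save(status);
        return RedirectToAction("Index");
    }
}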

Related

Entity Framework Core exception handling with database first

Background
With the EF Core code-first approach, validation is robust and simple: https://learn.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/validation
With the database-first approach, it seems like any validation happens behind the scenes in the database when dbcontext.SaveChanges(); is called. What's worse, these exceptions are nebulous and entirely unhelpful. For example, SqlException: String or binary data would be truncated can be thrown if any string property of any entity has too many characters (ours is a legacy app riddled with char(10) and such), or even if a string key is left null.
Question
I want to know if there is any reasonable or accepted way of enforcing the validation. I've found this question, which might help with debugging, but I would like to enforce the constraints in code.
Is there any better method than changing every auto-property to one that throws if its constraints aren't met?
Entity Framework Core does not enforce any validation at all. The validation rules you see in the example are enforced by MVC, not by EF. In fact, one of the main reasons EF Core dropped validation checks was exactly that redundancy: validation would run in the UI, then in EF, then again in the database. Hence client-side validation is left to the front end (MVC in this case) and server-side validation is done by the database engine.
When you use the database-first approach, EF Core does not generate any validation annotations, because it does not act on them anyway. That means you get only server-side validation, which surfaces as errors during SaveChanges.
The only way to enforce constraints in code (client side) is to write those annotations yourself so that MVC can enforce them, or to write custom code to deal with it. The whole validation mechanism is transparent to EF.
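As one hedged illustration of the "write custom code" option: if validation attributes do end up on the generated classes (for example via the approach described in the next answer), you can run DataAnnotations validation yourself before saving. This override is an assumption about how you might wire that up, not something EF Core provides:
using System.ComponentModel.DataAnnotations;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public partial class DBContext
{
    public override int SaveChanges()
    {
        // Validate added/modified entities against their data annotations so that
        // constraint violations throw readable exceptions before reaching SQL Server.
        var entities = ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified)
            .Select(e => e.Entity);

        foreach (var entity in entities)
        {
            Validator.ValidateObject(entity, new ValidationContext(entity), validateAllProperties: true);
        }

        return base.SaveChanges();
    }
}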
I ended up going with a pseudo-extension to the generator tooling. Since the DBContext is a partial class, I made a new partial class file that has a Main method:
public partial class DBContext
{
    public static void Main(string[] args)
    {
        DBContext context = new DBContext();
        // Build the model the same way EF would, so its metadata can be inspected
        var modelBuilder = new Microsoft.EntityFrameworkCore.ModelBuilder(
            new Microsoft.EntityFrameworkCore.Metadata.Conventions.ConventionSet());
        context.OnModelCreating(modelBuilder);
        Microsoft.EntityFrameworkCore.Metadata.IMutableModel model = modelBuilder.Model;
    }
}
From there I used LINQ to transform the information about each entity's properties, and the annotations on them, into a List<KeyValuePair<string,List<KeyValuePair<Regex,string>>>>, where each outer pair's key is the entity name and its value is a list of find-and-replace pairs that edit the already-generated code to add the corresponding validation, one per property. Then all I had to do was abuse the fact that the tooling generates the classes in <className>.cs files, and iterate over my list, executing the replacements for each entity's source code file.
I'd have preferred doing something a little less hacky, because I'm relying on the format that the EF tooling outputs, but it works.
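For what it's worth, a rough sketch of the metadata-to-replacements step under those assumptions (the regex and the [StringLength] replacement below are only an example pattern; the real ones depend on the code your EF tooling generates):
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata;

public partial class DBContext
{
    // Builds (entity name -> find/replace pairs) from the model metadata built in Main above.
    static List<KeyValuePair<string, List<KeyValuePair<Regex, string>>>> BuildReplacements(IMutableModel model)
    {
        return model.GetEntityTypes()
            .Select(entity => new KeyValuePair<string, List<KeyValuePair<Regex, string>>>(
                entity.ClrType.Name,
                entity.GetProperties()
                    .Where(p => p.ClrType == typeof(string) && p.GetMaxLength().HasValue)
                    .Select(p => new KeyValuePair<Regex, string>(
                        new Regex($@"public string {p.Name}\b"),
                        $"[StringLength({p.GetMaxLength()})]\n        public string {p.Name}"))
                    .ToList()))
            .ToList();
    }
}
// Then, for each entity, open the generated <className>.cs file and apply its replacement pairs.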

Where should EntityManager::persist() and EntityManager::flush() be called

I'm developing a medium scale application using Symfony2 and Doctrine2. I'm trying to structure my code according to the SOLID principles as much as possible. Now here is the question:
For creating new entities, I use Symfony Forms with proxy objects; i.e., I don't bind the form directly to my Entity, but to some other class that will be passed to a service which takes the needed action based on the received data. In other words, the proxy class serves as a DTO for that service, which I will call the Handler. Now, considering the Handler doesn't have a dependency on the EntityManager, where should I call EntityManager::persist() and EntityManager::flush()? I am usually comfortable with putting flush in the controller, but I'm not so sure about persist, since the controller shouldn't assume anything about what the Handler does, and maybe Handler::handle (the method the form data is passed to) does more than just persist a new Entity to the database. One idea is to create interfaces that encapsulate flush and persist and pass them around, acting as wrappers around EntityManager::flush() and EntityManager::persist(), but I'm not so sure about that either, since EntityManager::flush() might have unwanted consequences. So maybe I should just create an interface around persist.
So my question is: where and how should I call persist and flush in order to get the most SOLID code? Or am I just overcomplicating things in my quest for best practices?
If you have a service that handles tasks on your entities, then to me the right way is to inject the EntityManager into your service definition and do the persist and flush operations inside it.
Another way to proceed, if you want to keep that logic separate, is to create an EventSubscriber and raise a custom event from your "entity service" when you're ready to do the persist and flush operations.
My 2 cents:
About flush: as it hits the database, calling it in your controllers when needed, as you already do, sounds good to me.
About persist: it should be called in your Handler when your entity is in a "ready to be flushed" state. A Persister interface with only a persist method as a dependency of your Handlers, and a DoctrinePersister implementation injected into them, looks OK.
Another option here: you can implement a save() method in your entity repository class and do the persistence there. Inject your entity repository as a dependency into your Handler class.
If you don't want to couple your service and business logic to the EntityManager (good job), SOLID provides a perfect solution to separate it from your database logic.
//This class is responsible for business logic.
//It knows nothing about databases
abstract class CancelOrder
{
    //If you need something from the database in your business logic,
    //create a function that returns the object you want.
    //This gets implemented in the inherited class
    abstract protected function getOrderStatusCancelled();

    public function cancel($order)
    {
        $order->setOrderStatus($this->getOrderStatusCancelled());
        $order->setSubmittedTime(new DateTime());
        //and other business logic not involving database operations
    }
}

//This class is responsible for database logic. You can create a new class for any related CRUD operations.
class CancelOrderManager extends CancelOrder
{
    public function __construct($entityManager, $orderStatusRepository)...

    public function getOrderStatusCancelled()
    {
        return $this->orderStatusRepository->findByCode('cancelled');
    }

    public function cancel($order)
    {
        parent::cancel($order);
        $this->entityManager->flush();
    }
}

Where should I instantiate the Entity Framework's ObjectContext in a 3-tier application

I have a 3-tier web application with a bunch of simple forms. One to list records, one to edit a single record, etc. The works.
I have a DataLayer where my EDMX is.
I have an App Layer where my POCOs are.
I have a BusinessLayer with all my controller classes, etc. (not MVC!)
I have a UI layer where my web UI is.
The EDMX has many, many tables with a lot of navigation properties.
Of course, when I fetch data in one of my controllers, e.g. GetCustomerById(int id), I create the ObjectContext and dispose of it when I'm done.
However, the ObjectContext is out of scope when I try to access the navigation properties in the UI layer.
Should I do using (var context = new MyContext()) { ... } in the web layer? That does not seem right.
Should I create another set of POCOs that I populate from the entities' data in the BizLayer?
What happens when I want to save data entered in a web form? Would I call a BizLayer controller, e.g. SaveCustomer()?
My question is, how do you design the web UI layer if I want to be able to properly access the navigation properties of an entity?
Note:
The EDMX is set to use lazy loading.
You want to use lazy loading in the UI, but that means the UI defines the lifetime of your ObjectContext. There are many ways to achieve this without exposing the context to the UI. You can, for example, use this simple approach:
You mentioned a controller which uses the context and disposes it. So make your controller disposable and, instead of disposing the context in every method, use a single context for the whole lifetime of the controller. Dispose the context in the controller's Dispose method.
Instantiate your controller per request. For example, you can create the controller instance in Page.Load and dispose it in Page.Unload.
Use your controller and entities as you want. The whole processing of the request (between Load and Unload) will be in the scope of a single living context.
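A minimal sketch of that lifetime pattern (MyEntities, Customer and the member names are assumptions, not code from the question):
using System;
using System.Linq;

public class CustomerController : IDisposable
{
    // One context per controller instance, one controller instance per request
    private readonly MyEntities _context = new MyEntities();

    public Customer GetCustomerById(int id)
    {
        // The entity stays attached, so navigation properties can still lazy-load
        // later in the page lifecycle (between Page.Load and Page.Unload)
        return _context.Customers.Single(c => c.Id == id);
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}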
Anyway, you should not need lazy loading that much in a web application. In your form you usually know exactly which entities you need, so you should request them directly with eager loading.
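And a hedged fragment of the eager-loading alternative, with the same assumed names (Single requires System.Linq):
public Customer GetCustomerWithOrders(int id)
{
    using (var context = new MyEntities())
    {
        return context.Customers
                      .Include("Orders")          // eager-load exactly what the form needs
                      .Single(c => c.Id == id);
    }
}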

In MVP, how is Data Model complexity dealt with and where to dynamically show/hide controls?

In most of the MVP examples I've seen, the presenter calls some service, which calls some repository, which returns an entity. In most ASP.NET web applications that I have worked on, the logic is never that simple. In my last project, my presenter called a presenter service layer that had to jump through hoops to get the data that was to be shown on the screen.
Details: The service layer queries a database for, let's say, 8 entity objects, some of which are nested within each other; the code then maps those entities onto one huge object based off of an XSD. That XSD object was then passed to a third-party library to do something with it. After it returned the processed XSD object, the code had to parse through it, using a middle-layer view formatter class to extract and build what I call the "View Model" (I've heard some call it a DTO). This view model was then returned from the service layer to the presenter and databound to a repeater.
1. Where does the logic for showing/hiding controls go? Should that be a member of the DTO, or should the presenter derive this value? (I chose to have it as a member of the view model.)
2. Is it OK to have nested view models (DTOs), or should other user controls be used to break down the complexity?
3. What is a good way to wire up a presenter with all of the pages/user controls that use it, meaning one presenter with 5 IViews that require the same instance of the presenter? Should user controls be self-contained, or should they rely on the "parent" IView (page) to give them the proper presenter?
4. Instead of having a view model, why not just use the interface that the page implements and pass that to the service layer (through the presenter), letting the service hydrate the IView? (Doing this would give the service layer a reference to it; isn't that bad?)
using System.Collections.Generic;

public class ViewModel
{
    private readonly List<NestedViewModel> _nestedViewModel = new List<NestedViewModel>();

    public bool ShowHeight { get; set; }

    // Is there a better way to do this?
    public List<NestedViewModel> NestedViewModel { get { return _nestedViewModel; } }
}
1. IMO, the view should manage its own showing and hiding; it is the view and is responsible for managing the UI behaviour (a sketch follows after this answer).
2. I think complexity is OK as long as it's not too overbearing; you can break it down into nested sub-presenters/views if you need to.
3. Most MVP frameworks populate the presenter/view relationship from the view, especially since ASP.NET runs in the context of the page (the page is the HTTP handler processing the request, so it's what is alive at that point). The page, during init, establishes the view/presenter relationship. Most examples do it this way. I built an MVP framework and have also taken this approach.
4. You could; that's considered passive view, though the presenter should still do the work, not pass things directly to the service layer.
This is my opinion and there are many ways to do this.
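For the first point, here is a minimal sketch of the view keeping the show/hide decision to itself in the repeater's ItemDataBound handler (the control IDs and handler name are illustrative):
using System.Web.UI.WebControls;

protected void StatusRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType != ListItemType.Item && e.Item.ItemType != ListItemType.AlternatingItem)
        return;

    var model = (ViewModel)e.Item.DataItem;

    // The view model only exposes the flag; the view decides what to do with it
    var heightPanel = (Panel)e.Item.FindControl("HeightPanel");
    heightPanel.Visible = model.ShowHeight;
}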

Asp.net MVC RouteBase and IoC

I am creating a custom route by subclassing RouteBase. I have a dependency in there that I'd like to wire up with IoC. The GetRouteData method just takes an HttpContext, but I want to add in my unit of work as well... somehow.
I am using StructureMap, but info on how you would do this with any IoC framework would be helpful.
Well, here is our solution. Many little details may be omitted, but the overall idea is here. This answer may be a bit off-topic with respect to the original question, but it describes a general solution to the problem.
I'll try to explain the part that is responsible for plain custom HTML pages that are created by users at runtime and therefore can't have their own controller/action. So the routes have to either be built somehow at runtime or be "catch-all" routes with a custom IRouteConstraint.
First of all, let's state some facts and requirements.
We have some data and some metadata about our pages stored in the DB;
We don't want to generate a (hypothetically) whole million of routes for all existing pages beforehand (i.e. on application startup), because things can change while the application is running and we don't want to deal with pushing those changes into the global RouteCollection;
So we do it this way:
1. PageController
Yes, a special controller that is responsible for all our content pages. It has a single action, Display(int id) (actually we have a special ViewModel as the parameter, but I use an int id for simplicity).
The page with all its data is resolved by ID inside that Display() method. The method itself returns either a ViewResult (strongly typed to PageViewModel) or a NotFoundResult when the page is not found.
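A rough sketch of what that controller could look like (the IPageRepository abstraction and the PageViewModel constructor are assumptions, not the original code):
using System.Web.Mvc;

public class PageController : Controller
{
    private readonly IPageRepository _pages; // injected by StructureMap

    public PageController(IPageRepository pages)
    {
        _pages = pages;
    }

    public ActionResult Display(int id)
    {
        var page = _pages.GetById(id);
        if (page == null)
            return HttpNotFound();            // the "NotFoundResult" case (HttpNotFound in MVC 3+)

        return View(new PageViewModel(page)); // strongly typed view
    }
}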
2. Custom IRouteConstraint
We have to decide somewhere whether the URL the user requested refers to one of our custom pages. For this we have a special IsPageConstraint that implements the IRouteConstraint interface. In the Match() method of our constraint we just call our PageRepository to check whether there is a page matching the requested URL. The PageRepository is injected by StructureMap. If we find the page, we add an "id" parameter (with its value) to the RouteData dictionary, and it is automatically bound to PageController.Display(int id) by the DefaultModelBinder.
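A possible shape for that constraint, under the same assumptions about the repository (FindByPath is a hypothetical method name):
using System.Web;
using System.Web.Routing;

public class IsPageConstraint : IRouteConstraint
{
    private readonly IPageRepository _pages; // injected by StructureMap

    public IsPageConstraint(IPageRepository pages)
    {
        _pages = pages;
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        var page = _pages.FindByPath((string)values["pagePath"]);
        if (page == null)
            return false;

        values["id"] = page.Id; // picked up by PageController.Display(int id) via model binding
        return true;
    }
}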
But we need a route value to match against in the first place. Where do we get that? Here comes...
3. Route mapping with "catch-all" parameter
Important note: this route is defined at the very end of the route mapping list because it is very general, not specific. We check all our explicitly defined routes first and then check for a Page (which is easily changeable if needed).
We simply map our route like this:
routes.MapRoute("ContentPages",
"{*pagePath}",
new { controller = "Page", action = "Display" }
new { pagePath = new DependencyRouteConstraint<IsPageConstraint>() });
Stop! What is that DependencyRouteConstraint thing that appeared in the mapping? Well, that's what does the trick.
4. DependencyRouteConstraint<TConstraint> class
This is just another generic implementation of IRouteConstraint, which takes the "real" IRouteConstraint (IsPageConstraint) and resolves the given TConstraint only when the Match() method is called. It uses dependency injection, so our IsPageConstraint instance has all of its actual dependencies injected!
Our DependencyRouteConstraint then just calls dependentConstraint.Match(), passing along all the parameters, thus delegating the actual "matching" to the "real" IRouteConstraint.
Note: this class actually has the dependency on ServiceLocator.
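A plausible shape for that wrapper, with the CommonServiceLocator call as its single Service Locator dependency (this is a sketch, not the original implementation):
using System.Web;
using System.Web.Routing;
using Microsoft.Practices.ServiceLocation;

public class DependencyRouteConstraint<TConstraint> : IRouteConstraint
    where TConstraint : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        // Resolve the "real" constraint per request so its dependencies are injected by the container
        var dependentConstraint = ServiceLocator.Current.GetInstance<TConstraint>();
        return dependentConstraint.Match(httpContext, route, parameterName, values, routeDirection);
    }
}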
Summary
That way we have:
Our Route is clear and clean;
The only class that has a dependency on Service Locator is DependencyRouteConstraint;
Any custom IRouteConstraint uses dependency injection whenever needed;
???
PROFIT!
Hope this helps.
So, the problem is:
Routes must be defined beforehand, during application startup;
A route's responsibility is to map the incoming URL pattern to the right controller/action to perform some task on the request, and vice versa: to generate links from that mapping data. Period. Everything else is a "Single Responsibility Principle" violation, which is actually what led to your problem;
But UoW dependencies (like the NHibernate ISession or the EF ObjectContext) must be resolved at runtime.
And that is why I don't see children of the RouteBase class as a good place for a DB-related dependency. It makes everything tightly coupled and non-scalable, and it makes proper dependency injection practically impossible.
From where you are now (I guess there is already some kind of working system), you actually have just one more or less viable option:
Use the Service Locator pattern: resolve your UoW instance right inside the GetRouteData method (use the CommonServiceLocator backed by the StructureMap IContainer). That is simple, but not a really nice thing, because this way your Route gets a dependency on the static Service Locator itself.
With CSL you just have to call, inside GetRouteData:
var uow = ServiceLocator.Current.GetInstance<IUnitOfWork>();
or with just StructureMap (without CSL facade):
var uow = ObjectFactory.GetInstance<IUnitOfWork>();
and you're done. Quick and dirty. And the keyword is "dirty" actually :)
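Put together, the quick-and-dirty option inside a custom route could look roughly like this (IUnitOfWork and the matching logic are placeholders):
using System.Web;
using System.Web.Routing;
using StructureMap;

public class MyCustomRoute : RouteBase
{
    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        // Resolved per call through the container instead of being injected
        var uow = ObjectFactory.GetInstance<IUnitOfWork>();

        // ... use uow to decide whether this request matches, build and return RouteData,
        // or return null to let the next route try.
        return null;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        return null; // link generation omitted from this sketch
    }
}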
Sure, there is a much more flexible solution, but it needs a few architectural changes. If you provide more details on exactly what data you use in your routes, I can try to explain how we solved our pages routing problem (using DI and a custom IRouteConstraint).
