In one of my views, I have a ViewModel which I populate from two tables, and then bind a List<ViewModel> to an editable GridView (ASP.NET Web Forms).
Now I need to send that edited List<ViewModel> back to the Services layer to update it in the database.
My question is: is it okay to send the ViewModel back to the Services layer, or should it stay in the Presentation layer? If not, would a DTO be a better fit? Many thanks.
Nice question!
After several (hard) debates with my teammates, plus my experience with MVC applications, I would not recommend passing view models to your service / domain layer.
A ViewModel belongs to the presentation layer, no matter what.
Because a view model can be a combination of different models (e.g. one view model built from ten models), your service layer should only work with your domain entities.
Otherwise, your service layer ends up unusable because it is constrained by view models that are specific to one view.
Nice tools like https://github.com/AutoMapper/AutoMapper were made to do the mapping job for you.
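As a minimal sketch (the CustomerEditViewModel, Customer and customerService names here are made up for illustration), the controller can map the edited view model back onto a domain entity before handing it to the service layer:

using AutoMapper;

// One-time configuration: how to copy view model fields onto the domain entity.
var config = new MapperConfiguration(cfg => cfg.CreateMap<CustomerEditViewModel, Customer>());
var mapper = config.CreateMapper();

// In the controller: translate first, so the service layer never sees the view model.
Customer customer = mapper.Map<Customer>(editedViewModel);
customerService.Update(customer);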
I would not do it. My rule is: supply service methods with everything they need to do their job and nothing more.
Why?
Because it reduces coupling. More often than not service methods are addressed from several sources (consumers). It is much easier for a consumer to fulfil a simple method signature than having to build a relatively complex object like a view model that it otherwise may have nothing to do with. It may even need a reference to an assembly it wouldn't need otherwise.
It greatly reduces maintenance effort. I think an average developer spends more than 50% of his time inspecting and tracking existing code (maybe even much more). Now everybody knows that looking for something that is not there takes a disproportionately long time: you must have been everywhere to be sure. If a method receives arguments (or an object with properties) that are not used directly or further down the call stack, you or others will walk this long road time and again.
So if there is anything in the view model that does not play a part in the service method, don't use it to call the method.
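To make that concrete, here is a minimal sketch (the interface and property names are invented for this example): the service asks only for the values it needs, and the presentation layer unpacks the view model before calling down.

// Service layer: takes exactly what it needs to do its job, nothing more.
public interface IOrderService
{
    void UpdateShippingAddress(int orderId, string street, string city, string postalCode);
}

// Controller: pulls the needed values out of the view model.
orderService.UpdateShippingAddress(model.OrderId, model.Street, model.City, model.PostalCode);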
Yes. I am pretty sure it is OK.
Try to use MS Entity Framework and it will help you a lot.
Most MVC tutorials I've been reading seem to create 4 View objects for each Model. For example, if my Model is "Foo", there seem to be 4 .cshtml files: Foo/Create, Foo/Delete, Foo/Details, and Foo/Edit. Using the VisualStudio "scaffolding" helper does this as well.
Is this really considered MVC best-practice? It just feels wrong to have 4 classes that are 80-90% identical to each other. When I add a new field to Foo, I need to edit all 4 .cshtml files. This sort of dual-maintenance (quad-maintenance?) just makes my OO skin crawl.
Please tell me: is there an expected/accepted best practice which handles this differently? Or, if this really IS accepted best practice, tell me why the quad-maintenance shouldn't make me squirm.
I'm a reasonably skilled veteran of ASP.NET / C# / OO design, but pretty new to MVC; so apologies if this is a noob question. Thanks in advance for your help!
Edit: thanks for all the replies! I marked the most thorough one as the answer, but upvoted all that were helpful.
You'll probably need between two and four different views:
List (for viewing many things)
View (for viewing a single thing. Might not be necessary, if it's OK to use Edit as View, or if List has room to show all properties)
Create
Edit (can be the same as create, if you code cleverly)
Thus, if your model doesn't have too many properties to show them all in a table, and if you're OK with not having a static, non-editable view just for examining, you can get away just fine with List and Edit, and scrap the other two.
However, this doesn't solve your problem of double (or triple) maintenance if you update your model. There's other magic for that ;)
In ASP.NET MVC 3, there are extensions on HtmlHelper that let you do Html.DisplayForModel() and Html.EditorForModel(). These use predefined templates to nest themselves into your object and draw up display/edit fields for all public properties. If you pass DisplayForModel an IEnumerable<Foo>, it will create a table with column headers that are the property names of Foo (using the DisplayName attribute information if you supplied it) and where each row represents one Foo instance. If you give EditorForModel a Foo, it will create a <label> and an <input> for each public property on Foo.
All of the templates used by these powerful extension methods can be replaced by you, if you're not happy with the defaults. This can be done either on the level of Foo, in which case you'd be back in your double-maintenance scenario, or on lower levels (such as string or DateTime) to affect all editor/display fields generated with the templates.
For more information on exactly how this works, google "ASP.NET MVC 3 editor templates" and read a couple of tutorials. They'll explain the details much better than I could.
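As a rough illustration (assuming a simple Foo model with only value-type and string properties), an Edit view built on the default templates can shrink to almost nothing:

@model Foo
@using (Html.BeginForm())
{
    @Html.EditorForModel()
    <input type="submit" value="Save" />
}

A new property added to Foo then shows up in this view automatically, instead of requiring a hand-edit of four .cshtml files.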
The views that ASP.NET MVC creates for you don't necessarily need to be the views you use in production. I find them handy while developing quick prototypes or testing the database CRUD operations. Feel free to create whatever view(s) you would like to handle the operations.
I would generally just have 1 or 2 views to handle the basic operations and not use the built in views that are generated. For example, 1 view for adding, editing, or details and 1 view to show a list of objects.
It all depends on your application.
If you have a single item, you don't need a List view. If you can't edit it, then you don't need an edit view. Create and Edit can often be the same view, unless there are special things you need to do in one, but not the other.
In other words, use as many views as you need. There's no hard and fast rule here. The scaffolding is just there to help you on your way. Many kinds of apps will work just fine using the scaffolding, and won't require advanced HTML or Javascript.
Why would you want multiple views? Well, let's take the display and edit functions. You could create one view in which you use if statements to determine the edit mode, but this will complicate the view logic, and views should be as simple as possible.
The reason to create separate views is that it's easier to maintain them than one gigantic view with tons of conditional logic in it.
You can use exactly the same view for the [HttpGet] requests. Given that you pass a proper ViewModel to the view, it will populate with the appropriate data every time, whether you are loading the Create, Update, or Delete action.
The problem becomes apparent when you try to post that data to a specific action.
Naturally, a view should have only one form, which will be used for posting data. When you declare this form, you specify exactly which action to post to.
Having 3 different submit buttons in that form will not make a difference, since all of them will post the same form to the same action.
You could do some JavaScript tweaking in the OnClick event of these buttons to change the action to which the data is posted, but that would definitely not be best practice.
Bottom line: having 4 different views for each of the CRUD actions is the best practice for MVC.
I tend to create the following for an object's CRUD ops:
index
_form (partial)
new
update
delete
view
As the same form is shared between new and update, there is very little difference between the two. It really depends on how much variation you want, honestly.
As for delete, this is optional. I like to have a view in case javascript is disabled.
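As a sketch of the shared-form idea (the view and model names here are just for illustration), both new and update can render the same partial:

@* new.cshtml and update.cshtml *@
@model Foo
@using (Html.BeginForm())
{
    @Html.Partial("_form", Model)
    <input type="submit" value="Save" />
}

so a new field on Foo only has to be added to _form.cshtml once.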
edit:
You mention view models and the guy above posted a long, convoluted (no offense) VM code sample.
Personally, I hate classes written basically to mirror domain objects that are only used to "move" data. I hate VMs. I hate DTOs. I hate everything that makes me write more code than is necessary.
I guess I've drunk the Kool-Aid of other frameworks (Rails, Sinatra, Node.js) to the point where I can't stomach the idea of tossing DRY to the wind.
I personally say skip 'em.
Edit 2: I forgot list...
So, I need some input refactoring an asp.net (c#) application that is basically a framework for creating dynamic forms (any forms). From a high level point of view, there is a table that has the forms, and then there is a table that has all the form fields, where it is one to many between the two. There is a validation table, where each field can have multiple types of validation, and it is a one to many from the form fields table to the validation table.
So the issue is that this application has been sold as the be-all-end-all customizable solution to all the clients. The idea is that whatever form they want, we can build it just using DB configuration. The thing is, that is not always possible, because there are complex relationships between the fields, and complex relationships between the forms themselves. Also, there is only one codebase, and it serves multiple clients - all of whom host it on their own. There is very specific logic for each of the clients, and it is ALL in the same codebase, with no real separation. Sometimes it was too difficult to make it generic, so there are instances of hard-coded logic (as in, if formID = XXX then do _). You can also have nested forms, as in, one set of fields on its own within each form.
So usually, when one client requests a change, we make the change and deploy it to that client - but then another client requests a different change, and we make the change and deploy it for THAT client, but the change from the earlier client breaks it, and it's a headache trying to debug, because EVERYTHING is dynamic. There is no way we can roll back the earlier change, because then the other client would be screwed.
It's not done in a real 3-tier architecture - it's a web site with references to a DB class and a class library. There is business logic in the web site itself, in the class library, and in the database stored procs (validation is done in the stored procs).
I've been put in charge of re-organizing the whole thing, and these are my thoughts/questions:
I think this is a bad model in general, because one of the things I heard one of the developers say is that any time any client requests a change, we should deploy to everybody - but that is not realistic if we have, say, 20 clients: there would need to be regression testing on EVERYTHING, since we don't know the impact...
There are about 100 forms in total, and there is some similarity in them (not much). But I think the idea that a dynamic engine can satisfy ALL form requests was not realistic either. Clients come up with the weirdest requests. For example, they have this engine doing a regular data-entry form AND a search form.
There is a lot of state being preserved between pages, and it is all done using session variables, which is OK, except that it is not really tracked, so session values from the same user keep getting overwritten, and I think the session usage should be gotten rid of.
Should I really just rewrite the whole thing? This app is about 3 years old, and there has been lots of testing and things done, and serious business logic implemented, so I hate to get rid of all that (Joel's advice). But it's really a mess of spaghetti code, everything takes forever to do, and things break all the time because of minor changes.
I've been reading Martin Fowler's "Refactoring" and Michael Feathers' "Working Effectively with Legacy Code" - they are good, but I feel they were written for applications that were 'slightly' better architected, where there is still a 3-tiered architecture and 'some' semblance of logic...
Thoughts/input anyone?
Oh, and "Help!"
My current project sounds like almost exactly the same product you're describing. Fortunately, I learned most of my hardest lessons on a former product, and so I was able to start my current project with a clean slate. You should probably read through my answer to this question, which describes my experiences, and the lessons I learned.
The main thing to focus on is the idea that you are building a product. If you can't find a way to implement a particular feature using your current product feature set, you need to spend some additional time thinking about how you could turn this custom one-off feature into a configurable feature that can benefit all (or at least many) of your clients.
So:
If you're referring to the model of being able to create a fully customizable form that makes client-specific code almost unnecessary, that model is perfectly valid and I have a maintainable working product with real, paying clients that can prove it. Regression testing is performed on specific features and configuration combinations, rather than a specific client implementation. The key pieces that make this possible are:
An administrative interface that is effective at disallowing problematic combinations of configuration options.
A rules engine that allows certain actions in the system to invoke customizable triggers and cause other actions to happen.
An Integration framework that allows data to be pulled from a variety of sources and pushed to a variety of sources in a configurable manner.
The option to inject custom code as a plugin when absolutely necessary.
Yes, clients come up with weird requests. It's usually worthwhile to suggest alternative solutions that solve the client's problem while still allowing your product to remain robust and configurable for other clients. Sometimes you just have to push back. Other times you'll have to do what they say, but use wise architectural practices to minimize the impact this could have on other clients' code.
Minimize use of the session to track state. Each page should have enough information on it to track the current page's state. Information that needs to persist even if the user clicks "Back" and starts doing something else should be stored in a database. I have found it useful, however, to keep a sort of breadcrumb tree on the session, to track how users got to a specific place and where to take them back to when they finish. But the ID of the node they're actually on currently needs to be persisted on a page-by-page basis, and sent back with each request, so weird things don't happen when the user is browsing to different pages in different tabs.
Use incremental refactoring. You may end up re-writing the whole thing twice by the time you're done, or you may never really "finish" the refactoring. But in the meantime, everything will still work, and you'll have new features every so often. As a rule, rewriting the whole thing will take you several times as long as you think it will, so don't try to take the whole thing in a single bite.
I have a number of similar apps for building dynamic forms that I support.
There's a whole lot of things you could/could not do & you're right to think hard before throwing away 3 years of testing/development.
My input for you to consider is to implement a plug-in architecture on top of what you've got. Any custom code for a form goes in the plug-in, and the name of this plug-in is stored with the form. When you generate a form, the correct plug-in is called to enhance the base functionality. That way you get to move all the custom code out of the existing library. It should also mean fewer breaking changes, since each plug-in only affects the form it's attached to.
From that point it'll be easy to refactor the core engine as it's common functionality across all clients & forms.
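A minimal sketch of what such a plug-in contract might look like (the interface, FormContext, and property names are invented for illustration, not taken from your codebase):

// Hypothetical plug-in contract: the form definition stores the plug-in's type name,
// and the engine loads it when the form is generated.
public interface IFormPlugin
{
    // Called after the base engine has built the form, so the plug-in can add
    // client-specific fields, rules or behaviour without touching the core.
    void Enhance(FormContext form);
}

// In the engine (sketch): resolve and run the plug-in named by the form definition.
string pluginTypeName = formDefinition.PluginTypeName; // e.g. "ClientX.Forms.OrderFormPlugin"
if (!string.IsNullOrEmpty(pluginTypeName))
{
    IFormPlugin plugin = (IFormPlugin)Activator.CreateInstance(Type.GetType(pluginTypeName));
    plugin.Enhance(formContext);
}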
Since your application seems to have become a big ball of mud, a complete (or almost complete) rewrite might make sense.
You should also take into account newer technologies like document-oriented databases (CouchDB, MongoDB).
Most of the form definitions could probably fit pretty well in a document-oriented database. For example:
To define a customer form, you could use a document that looks like:
{Type:"FormDefinition",
EntityType: "Customer",
Fields: [
{FieldName:"CustomerName",
FieldType:"String",
Validations:[
{ValidationType:"Required"},
{ValidationType:"StringLength", Minimum:15, Maximum:50},
]},
...
{FieldName:"CustomerType",
FieldType:"Dropdown",
PossibleValues: ["Standard", "Valued", "Gold"],
DefaultValue: ["Standard"]
Validations:[
{ValidationType:"Required"},
{
ValidationType:"Custom",
ValidationClass:"MySystem.CustomerName.CustomValidations.CustomerStatus"
}
]},
...
]
};
With this kind of document to define your forms, you could easily add forms and validations which are customer specific.
You could easily add subforms using a FieldType of SubForm or whatever.
You could define FieldTypes for all common types of fields like e-mail, phone numbers, address, etc.
The custom validation class referenced by ValidationClass above could then look something like this:

using System.Collections.Generic;

namespace MySystem.CustomerName.CustomValidations
{
    public class CustomerStatus : IValidator
    {
        private readonly FormContext form;
        private readonly List<ValidationError> validationErrors;

        public CustomerStatus(FormContext fc)
        {
            this.validationErrors = new List<ValidationError>();
            this.form = fc;
        }

        public List<ValidationError> Validate()
        {
            // Cross-field rule: a Gold customer must have at least 10 orders.
            if (this.form.Fields["CustomerType"] == "Gold" && int.Parse(this.form.Fields["OrderCount"]) < 10)
            {
                this.validationErrors.Add(new ValidationError("A gold customer must have at least 10 orders"));
            }
            // Cross-field rule: a Valued customer must have at least 5 orders.
            if (this.form.Fields["CustomerType"] == "Valued" && int.Parse(this.form.Fields["OrderCount"]) < 5)
            {
                this.validationErrors.Add(new ValidationError("A valued customer must have at least 5 orders"));
            }
            return this.validationErrors;
        }
    }
}
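To tie the definition and the code together, the engine could resolve the validator by the type name stored in ValidationClass, roughly like this (a sketch; fieldDefinition and formContext stand in for whatever objects your engine actually uses):

// Instantiate the custom validator named in the form definition and run it.
string validatorTypeName = fieldDefinition.ValidationClass;
Type validatorType = Type.GetType(validatorTypeName);
IValidator validator = (IValidator)Activator.CreateInstance(validatorType, formContext);
List<ValidationError> errors = validator.Validate();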
A record of a document with that definition could look like this:
{Type:"Record",
EntityType: "Customer",
Fields: [
{FieldName:"CustomerName", Value:"ABC Corp.",
{FieldName:"CustomerType", Value:"Gold",
...
]
};
Sure, this solution is a lot of work, but once it is in place it could make it really easy to create/update/customize forms.
This is a common but (IMO) somewhat naive design approach: "Instead of solving the customer's problem, let's build a tool to let them solve their own problems!" But the reality is that, generally, customers want YOU to solve their ACTUAL problems. So build things that solve their problems.
If you can architect it in a way that allows you to reuse some parts for different customers, fine. But that is generally what the frameworks have done for you already - work out the common features that applications need and make them available in neat packages.
I am working on a design spec for a new application that will be heavily workflow driven.
Before I reinvent the wheel, is there already a decent, lightweight workflow engine around that plugs into ASP.NET?
Basically, I'm looking for something that handles moving through a defined set of workflow pages while handling state management automatically.
If this isn't around already, I'll definitely try to abstract the engine from my app and put it on codeplex, as it would be really handy.
Any suggestions?
Note: .NET 2.0, so no WF (Windows Workflow Foundation), though I think WF would be overkill for my needs anyway.
EDIT: Seems like there is a legitimate need for this, and there isn't a product out there...So I might build this.
Here is what I'm picturing:
Custom Page class called WebFlowPage
All WebFlowPages are registered in a workflow mapper.
Each WebFlowPage has some form of state object.
An HttpHandler picks the appropriate WebFlowPage based on the workflow and populates it from the state object.
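A very rough sketch of that idea (all of these names are the hypothetical ones from the list above, not an existing library):

using System;
using System.Collections.Generic;

// Hypothetical base page: each step in a flow derives from this and exposes
// a state object the engine can persist between requests.
public abstract class WebFlowPage : System.Web.UI.Page
{
    public abstract object FlowState { get; set; }
}

// Hypothetical mapper: registers the ordered pages that make up each workflow.
public class WorkflowMapper
{
    private readonly Dictionary<string, List<Type>> flows = new Dictionary<string, List<Type>>();

    public void Register(string flowName, params Type[] pages)
    {
        flows[flowName] = new List<Type>(pages);
    }

    public Type GetPage(string flowName, int step)
    {
        return flows[flowName][step];
    }
}

The HttpHandler would then look up the current flow and step, pull the state object out of wherever it is persisted, and hand it to the page.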
Is the workflow dynamic, or static?
If the workflows are simple, you could roll your own workflow engine.
In certain situations it can be fairly simple: just a couple of data tables to handle the rules, processing, and state.
A lot of workflow engines are built for large-scale processing (credit card applications, for example). For small-scale work you should at least consider rolling your own, which eliminates the overhead of, and dependency on, an engine.
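For the small-scale case, the "couple of data tables" can be as simple as this (sketched here as C# entities; the names are invented):

using System;

// One row per step of a defined workflow.
public class WorkflowStep
{
    public int StepId;
    public string WorkflowName;
    public int StepOrder;     // position of the step within the flow
    public string PageUrl;    // the page that implements this step
}

// One row per running instance of a workflow.
public class WorkflowInstance
{
    public Guid InstanceId;
    public string WorkflowName;
    public int CurrentStepId;
    public string SerializedState; // e.g. an XML payload of the data collected so far
}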
Not sure exactly what you wish to do here, but Ra-Ajax can easily keep state, at least if you want your solution Ajaxified...
For reference purposes you might want to check out the Ajax Calendar sample or even the (banalistically implemented) Ajax Wizard sample. It surely beats the hell out of doing it with JavaScript...
And every time you "do something" you're in "server-land" which means you can store temporaries all the time as you wish...
The project is LGPL.
(PS: yes, I do work with it.)
Building a custom workflow engine is not trivial, although it may seem simple at first. We've tried that. It depends a lot on the complexity of the logic you need it to cover.
Given the current state of the Windows Workflow Foundation and the lack of another framework that abstracts the workflow concepts, I would choose WF if you need complex logic, asynchronous handling or branches in your workflows.
Tracking your state through the workflow can be accomplished by carrying some kind of XML payload or by storing the state in a database.
If your workflow is actually a sequential set of forms that need to be filled in by the user, tracking the steps and guiding the user to the next step can be accomplished with some simple custom solution.
You could take a look at the InRule engine too.
Also, there is nxBRE.
These too are mostly used for business rules.
InRule is proprietary, whereas nxBRE supports RuleML (the de facto standard).
You might need to make your own implementation for the pages, and use the rule engine as the "structure".
At this moment, I know that SharePoint 2007 supports page workflows (using WF), but this would imply using .NET Framework 3.0 and deploying SharePoint.
My suggestion would be to use whatever you find more light and easier to use.
I think the term "workflow" is very open to interpretation. I have been working lately with a type of workflow that is very different from what you seem to be describing. Mine is a state machine based workflow where the state of a particular record determines what actions a user can take to move the record to the next step in the business process. So "workflow" in this instance means how the record flows from one state to another until it is finally completed.
Your usage of workflow seems to have more to do with moving a user from one page to another in a linear multi-step process, which is a completely different use case (correct me if I'm wrong). So before coming up with a general purpose "workflow" engine that anyone could use, I would recommend defining a little bit better exactly what types of situations this system would handle.
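For what it's worth, the state-machine flavour can be sketched with little more than an enum and a transition table (an illustration only, not a full engine; the states and actions are invented):

using System.Collections.Generic;

// The record's current state determines which actions are allowed next.
public enum RecordState { Draft, Submitted, Approved, Rejected }

public class RecordWorkflow
{
    // (current state, action) -> next state
    private static readonly Dictionary<RecordState, Dictionary<string, RecordState>> Transitions =
        new Dictionary<RecordState, Dictionary<string, RecordState>>
        {
            { RecordState.Draft,     new Dictionary<string, RecordState> { { "Submit",  RecordState.Submitted } } },
            { RecordState.Submitted, new Dictionary<string, RecordState> { { "Approve", RecordState.Approved },
                                                                           { "Reject",  RecordState.Rejected } } }
        };

    public RecordState Apply(RecordState current, string action)
    {
        // Throws if the action is not valid for the current state.
        return Transitions[current][action];
    }
}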
I've been using this for a few months: http://objectflow.codeplex.com. It's not ASP-specific, but it may fit your needs.
While browsing the web for some workflow & BPM resources, I found the following project: NetBPM. Unfortunately, the project seems to have stopped.
I don't think there is a workflow engine that will automatically handle state for you, but if you are moving through a set of pages in a process such as checkout on an e-commerce site, perhaps the ASP.NET Wizard control could help you?
There are a few workflow options. "Aspose" and "Skelta" are the offerings I'm evaluating.
Fábio
You can use Workflow Engine; just read the documentation and run the demo.
All of the features you need for a dynamic workflow engine have been added there.
I keep hearing about the DRY Principle and how it is so important in ASP.NET MVC, but when I do research on Google I don't seem to quite understand exactly how it applies to MVC.
From what I've read, it's not just the copy & paste code smell I thought it was; it is more than that.
Can any of you give some insight into how I might use the DRY Principle in my ASP.NET MVC application?
DRY just means "Don't Repeat Yourself". Make sure that when you write code, you only write it one time. If you find yourself writing similar functionality in all of your Controller classes, make a base controller class that has the functionality and then inherit from it, or move the functionality into another class and call it from there instead of repeating it in all the controllers.
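A minimal sketch of that idea (the controller names and the logging helper are invented for illustration):

using System.Web.Mvc;

// Shared functionality is written once, in a base controller...
public abstract class BaseController : Controller
{
    protected void Audit(string message)
    {
        // common logging/auditing code lives in exactly one place
        System.Diagnostics.Trace.WriteLine(message);
    }
}

// ...and every controller that needs it simply inherits it.
public class ProductsController : BaseController
{
    public ActionResult Index()
    {
        Audit("Listing products");
        return View();
    }
}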
use filter attributes to manage cross-cutting aspects (authentication, navigation, breadcrumbs, etc.)
use a layer supertype controller (apply common controller-level filters to it; see MvcContrib for an example)
write custom action results (like in MvcContrib; for example, we made one called LogoutResult that just calls FormsAuthentication.SignOut() - see the sketch after this list)
use a convention for view names
most importantly - keep your controller actions dumb; look for reuse opportunities in services
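Here is a rough sketch of such a custom action result (an illustration in the spirit of the MvcContrib example, not its actual implementation):

using System.Web.Mvc;
using System.Web.Security;

public class LogoutResult : ActionResult
{
    private readonly string redirectUrl;

    public LogoutResult(string redirectUrl)
    {
        this.redirectUrl = redirectUrl;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Sign the user out, then send them somewhere sensible.
        FormsAuthentication.SignOut();
        context.HttpContext.Response.Redirect(redirectUrl);
    }
}

// Usage in a controller action: return new LogoutResult("~/");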
Don't Repeat Yourself. It can apply to many different aspects of programming. The most basic level of this is preventing code smells. I haven't used ASP.NET, so I can't get specific about it or MVC.
In C++, templates prevent multiple copies of the same function.
In C, void* pointers can be used in a similar fashion, but with great care.
Inheriting from another class allows other classes to use the same code base without having to copy the code.
Normalizing data in a database minimizes redundant data, which also adheres to the DRY principle.
When you go over a "thought" in a project, ask yourself:
Have I already written this code?
Will this code be useful elsewhere?
Can I save coding by building off of a previous class/function?
DRY is not specific to any one technology. Just make sure you look at your classes from a functionality standpoint (not even from a copy/paste coder view) and see where the duplication occurs. This process will probably not happen in one sitting, and you will only notice duplication after reviewing your code several months later when adding a new feature. If you have unit tests, you should have no fear in removing that duplication.
One advantage of MVC as related to not repeating yourself is that your controller can do tasks common to all pages in the one class. For example, validating against certain types of malicious requests or validating authentication can be centralized.
DRY should not only be applied to code, but to information in general. Are you repeating things in your build system? Do you have data which should be moved to a common configuration file, etc.
Well, the most common example that I can give about DRY and UI is using things like MasterPages and UserControls.
MasterPages ensure that you have written all the static HTML only once.
UserControls ensure reusability of code. For example, you will have a lot of forms doing basic stuff like CRUD. Now, ideally we want users to see different pages for Create and Update, even though the form fields in both will be almost the same. What we can do is combine all the common controls and put them into a control that can be reused across both pages. This ensures that we are never retyping (or copy-pasting) the same code.
DRY is especially important in MVC because of the increase in the sheer number of files to accomplish the same task.
There seems to be a misconception that everything in a domain model has to be copied up into a special view model. You can have domain models be domain models, and view models be something that knows nothing of domain specifics and is more generic. For example:
Domain Model classes: Account, Asset, PurchaseOrder
View Models: List, Table, Tuple, SearchFormBackingModel (checked options, output options), etc. The view itself might be much more view-implementation specific.
The Tuple/Dictionary/Map might map to single Account, Asset, and PurchaseOrder instances, while a Table might be useful for a collection of them, etc. You still have MVC, but you can keep session data that is not yet ready for a transaction in a view model without having it violate the rules of your domain model, which is where the rules should go. The domain models will be less anemic, and less of an anti-pattern, that way. You can pass these rules up front and use them there, or just in the back, or both, depending on how the system reads from clients, etc.