I'm about to start a new project and I'd like to use the Single Page Application approach. Since I'll be using ASP.NET, I think the easiest way will be to use Angular, which I'm new to.
Anyway, what scares me the most about Angular (or any other JS/TS technology) is that, since I don't have much time, I can't afford to rewrite all the models/entities in another language. The cost of writing and maintaining that code is too high for me.
tl;dr
So my question is: is there a way to have Angular use the original model/entity names so I can use them in the page without having to rewrite any code unnecessarily?
Will the .NET attributes have any effect?
I guess your concern is that your business object world (entity model) needs to be reflected in your client/Angular app as models (JavaScript/TypeScript objects)? The need for them also comes from the typing errors you get in Angular 2.
Creating and maintaining a transparent model world spanning the server and client side is way too much effort for real-world applications, although it would be nice.
I decided to receive the model as the result of a remote call via AJAX/Web API and work with these "models" directly in my client application. The result then reflects the business model (entities) you have probably already defined.
this.dataService.getRecords('MT_MyEntity')
    .subscribe(
        (data: any) => {
            const response: any = data;          // keep it typed as "any" to avoid typing errors
            const resprecords: any[] = response.items;
            // Here you get the entities;
            // deal with the business objects fetched from the remote system and use them in forms, etc.
        },
        error => {
            // your error handling
        });
In your application you can then use the entity and attribute names you have defined in your server-side model (take care of upper/lower-case modifications introduced by JSON serialization).
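If you want the JSON property names to match your C# model exactly instead of the default camelCasing, you can adjust the serializer on the server. A minimal sketch, assuming ASP.NET Core with System.Text.Json (with Newtonsoft.Json you would set the ContractResolver instead):

// In Startup.ConfigureServices: keep the C# property names as-is
// instead of camelCasing them in the JSON output.
services.AddControllers()
    .AddJsonOptions(options =>
    {
        // null naming policy = emit PascalCase names exactly as declared in C#
        options.JsonSerializerOptions.PropertyNamingPolicy = null;
    });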
For me this is a pragmatic way to deal with that and it works very well.
For any decent-sized application the benefits of creating a client-side model far outweigh the effort required to create and maintain it.
This effect is more pronounced with TypeScript, as it allows compile-time checking of the contracts. As we move more and more code to the client side and use frameworks like Angular, having a clearly defined model helps us understand what is happening. We derive the same benefits that we get when type checking is available on the server.
Having a separate client-side model also allows us to adapt the model to client-side UI needs (albeit sometimes we create a viewmodel to satisfy such requirements).
The approach of generating these client-side contracts, as highlighted by #Ivan, can help reduce the overall effort.
I'm using Prism Forms 7.x in a Xamarin Forms app. Up to now, I was using the INavigatedAware interface in view models to check if a navigation to or from the respective view model happened. Now, I saw that there is INavigatingAware, which only provides the OnNavigatingTo method (so, the navigation is not yet finished).
My questions regarding INavigatingAware.OnNavigatingTo:
- Can I use INavigatingAware where I'm not interested in the OnNavigatedFrom call?
- Is it better in terms of performance to load data within OnNavigatingTo (before the BindingContext is set, so that e.g. the data bindings don't need to be updated twice)?
It would be nice if you could share your experiences and best practices regarding these two interfaces.
INavigatingAware.OnNavigatingTo was first introduced to Prism to help developers perform initialization logic, similar to ViewWillAppear.
To help better visualize this the events look something like this within the NavigationService:
Create the Page
Set the ViewModelLocator.Autowire property if it is null
Apply any behaviors from the PageBehaviorFactory
Call IConfirmNavigation.CanNavigate (and its async counterpart) on the Page/ViewModel we're navigating away from
Call INavigatingAware.OnNavigatingTo
Push the page onto the NavigationStack
Call INavigatedAware.OnNavigated{From|To}
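To see that order in practice, you can implement the relevant interfaces on a ViewModel and log from each callback. A rough sketch (Prism.Forms 7.x; the ViewModel name is made up, and exact parameter types can vary slightly between Prism versions):

using System.Diagnostics;
using Prism.Navigation;

public class DetailPageViewModel : INavigationAware, IConfirmNavigation
{
    // Called on the Page/ViewModel we are navigating away from.
    public bool CanNavigate(INavigationParameters parameters)
    {
        Debug.WriteLine("CanNavigate");
        return true;
    }

    // Called on the target before it is pushed onto the NavigationStack.
    public void OnNavigatingTo(INavigationParameters parameters)
    {
        Debug.WriteLine("OnNavigatingTo");
    }

    // Called after the push, on the page we left...
    public void OnNavigatedFrom(INavigationParameters parameters)
    {
        Debug.WriteLine("OnNavigatedFrom");
    }

    // ...and on the page we arrived at.
    public void OnNavigatedTo(INavigationParameters parameters)
    {
        Debug.WriteLine("OnNavigatedTo");
    }
}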
BREAKING CHANGE
Now, all of that said, we have had a tremendous amount of feedback on INavigatingAware (the essence of this very question). As a result of the overwhelming feedback from the Prism community, INavigatingAware has been made hard obsolete in Prism 7.2. This means that it was removed from INavigationAware and will throw a compile-time error if you implement it directly. For those cases where you got it for free from INavigationAware, it simply will not be called. Moving forward, we have introduced a series of interfaces to make this easier and more self-documenting as to the intent.
New Interfaces & API
IInitialize.Initialize
IInitializeAsync.InitializeAsync
The new IInitialize interface is the direct replacement for INavigatingAware. We have long gotten feedback that people would like the ability to perform async tasks during initialization. The issue here is that this can cause a very noticeable delay in Navigation similar to IConfirmNavigationAsync. If you use either of those async interfaces, you will need to be sure to include some sort of busy/loading overlay on your screen.
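As a rough sketch of what a migrated ViewModel could look like in Prism 7.2 (everything except the interface names is hypothetical; you would normally pick either the sync or the async variant):

using System.Threading.Tasks;
using Prism.Navigation;

public class OrdersPageViewModel : IInitialize, IInitializeAsync
{
    // Direct replacement for INavigatingAware.OnNavigatingTo:
    // lightweight, synchronous setup before the page is pushed.
    public void Initialize(INavigationParameters parameters)
    {
        // read navigation parameters, set up defaults, etc.
    }

    // Use this only when initialization really has to await something;
    // it delays the navigation, so show a busy/loading overlay.
    public async Task InitializeAsync(INavigationParameters parameters)
    {
        await LoadOrdersAsync();
    }

    private Task LoadOrdersAsync() => Task.CompletedTask; // hypothetical data load
}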
My question might seem strange to pros, but please take into account that I am coming from the Ruby on Rails world =)
So, I am learning ASP.NET Core. And I like what I am seeing in it compared to Rails. But there is always that but... Let me describe the theoretical problem.
Let's say I have a Product model, and there are over 9000 records in the database. It is obvious that I have to paginate them. I've read this article, but it seems to me that something is wrong there, since the controller shouldn't use the context directly. It should use some repository (but that example might be written that way only for simplicity).
So my question is: who should be responsible for pagination? Should it be the controller, which receives some queryable object from the repository and takes only the records it needs? Should it be my own business service which does the same? Or should the repository have a method like public IEnumerable<Product> ListProducts(int offset, int page)?
One Domain-Driven-Design solution to this problem is to use a Specification. The Specification design pattern describes a query in an object. So you might create a PagedProduct specification which would take in any necessary parameters (pageSize, pageNumber, filter). Then one of your repository methods (usually a List() overload) would accept an ISpecification and would be able to produce the expected result given the specification. There are several benefits to this approach. The specification has a name (as opposed to just a bunch of LINQ) that you can reason about and discuss. It can be unit tested in isolation to ensure correctness. And it can easily be reused if you need the same behavior (say on an MVC View action and a Web API action).
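A minimal sketch of what that could look like (the interface shape below is one common variant, not a prescribed API, and Product is the entity from the question):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// A deliberately small specification abstraction; real implementations often
// also expose includes, ordering, etc.
public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }
    int Skip { get; }
    int Take { get; }
}

public class PagedProductsSpecification : ISpecification<Product>
{
    public PagedProductsSpecification(int pageNumber, int pageSize, string nameFilter = null)
    {
        Criteria = p => nameFilter == null || p.Name.Contains(nameFilter);
        Skip = (pageNumber - 1) * pageSize;
        Take = pageSize;
    }

    public Expression<Func<Product, bool>> Criteria { get; }
    public int Skip { get; }
    public int Take { get; }
}

// The repository stays generic: one List overload serves every named query.
public interface IProductRepository
{
    IReadOnlyList<Product> List(ISpecification<Product> spec);
}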
I cover the Specification pattern in the Pluralsight Design Patterns Library.
First of all, I would like to remind you that examples like the one you linked are overly simplified, so they shouldn't lead you to believe that that is the correct way. Simple things with fewer abstraction layers are easier to oversee and understand (at least in the case of simple examples for beginners, when the reader may not know where to look for what), and that's why they are presented like that.
Regarding the question: I would say none of the above. If I had to decide between them then I would say the service and/or the repository, but that depends on how you define your storage layer, etc.
"None of the above", then what? My preference is to implement an intermediary layer between the service layer and the Web UI layer. The service layer exposes manipulation functionality but for read operations, exposes the whole collection as an IQueryable, and not as an IEnumerable, so that you can utilize LINQ-to-whatever-storage.
Why am I doing this, many may ask. Because almost all the time you will use specialized viewmodels. To display the list of products on an admin page, for example, you would need to display the values of columns in the products table, but you are very likely to need to display its category as well. It is very rarely the case that you need data from only one table, and by exposing the items as an IQueryable<T> you get the benefit of being able to do Selects like this:
public IEnumerable<ProductAdminTableViewModel> GetProducts(int page, int pageSize)
{
    return backingQueryable
        .OrderBy(prod => prod.Id) // most providers require an ordering before Skip/Take
        .Select(prod => new ProductAdminTableViewModel
        {
            Id = prod.Id,
            Category = prod.Category.Name, // your provider will likely resolve this to a Join
            Name = prod.Name
        })
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToList();
}
As commented, by using the backing store as an IQueryable you will be able to do projections before your query hits the DB and thus you can avoid any nasty Select N+1s.
The reason this sits in an intermediary layer is simple: you do not want to reference your web project from either your repo or your service layer (project), but because of this you cannot implement the viewmodel-specific queries in the service layer, simply because the viewmodels cannot be resolved there. This implies that the viewmodels reside in this same project as well; to this end, the MVC project only contains views, controllers and the ASP.NET MVC-related guttings of your app. I usually call this intermediate layer 'SolutionName.Web.Core', and it references the service layer to be able to access the IQueryable<T>-returning method.
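Consumed from a controller the whole thing then stays trivial; a sketch with hypothetical names for the Web.Core service:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public class ProductsController : Controller
{
    private readonly IProductQueryService _products; // lives in SolutionName.Web.Core

    public ProductsController(IProductQueryService products)
    {
        _products = products;
    }

    public IActionResult Index(int page = 1)
    {
        const int pageSize = 20;
        IEnumerable<ProductAdminTableViewModel> model = _products.GetProducts(page, pageSize);
        return View(model);
    }
}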
In one of my views, I have a ViewModel which I populate from two tables, and then bind a List<ViewModel> to an editable GridView (ASP.NET Web Forms).
Now I need to send that edited List<ViewModel> back to the Services layer to update it in the database.
My question is: is it okay to send the ViewModel back to the Services layer, or should it stay in the Presentation layer? If not, should I rather use a DTO? Many thanks.
Nice question!
After several (hard) debates with my teammates, plus my experience with MVC applications, I would not recommend passing viewModels to your service / domain layer.
ViewModels belong to the presentation layer, no matter what.
Because a viewModel can be a combination of different models (e.g. one viewModel built from ten models), your service layer should only work with your domain entities.
Otherwise, your service layer ends up unusable, because it is constrained by viewModels that are specific to one view.
Nice tools like https://github.com/AutoMapper/AutoMapper were made to do the mapping job.
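For instance, with AutoMapper the translation back to a domain entity is a one-liner at the call site (type names here are hypothetical; the configuration would normally be registered once at startup rather than per call):

using AutoMapper;

public class ProductMapping
{
    private readonly IMapper _mapper;

    public ProductMapping()
    {
        var config = new MapperConfiguration(cfg => cfg.CreateMap<ProductEditViewModel, Product>());
        _mapper = config.CreateMapper();
    }

    // The presentation layer maps, then hands only the domain entity to the service.
    public Product ToEntity(ProductEditViewModel viewModel)
    {
        return _mapper.Map<Product>(viewModel);
    }
}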
I would not do it. My rule is: supply service methods with everything they need to do their job and nothing more.
Why?
Because it reduces coupling. More often than not service methods are addressed from several sources (consumers). It is much easier for a consumer to fulfil a simple method signature than having to build a relatively complex object like a view model that it otherwise may have nothing to do with. It may even need a reference to an assembly it wouldn't need otherwise.
It greatly reduces maintenance effort. I think an average developer spends more than 50% of his time inspecting and tracing existing code (maybe even much more). Now everybody knows that looking for something that is not there takes disproportionately long: you must have been everywhere to be sure. If a method receives arguments (or an object with properties) that are not used directly or further down the call stack, you or others will walk this long road time and again.
So if there is anything in the view model that does not play a part in the service method, don't use it to call the method.
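In other words (hypothetical names, just to illustrate the shape of the signature):

public interface IProductService
{
    // Avoid: couples the service to a presentation type and to everything
    // the view model happens to carry along.
    // void UpdatePrice(ProductEditViewModel viewModel);

    // Prefer: the signature states exactly what the method needs, nothing more.
    void UpdatePrice(int productId, decimal newPrice);
}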
Yes, I am pretty sure it is okay.
Try using Microsoft Entity Framework and it will help you a lot.
I've spent a lot of time working in Django, and have grokked the framework well enough that I have started replacing the original components (view engine, etc.) with my own custom components and the sky hasn't fallen down.
I've been looking at ASP.NET MVC, and been quite interested (I really like C#/F#) but so far have learned... just about nothing. I've been digging through http://www.asp.net/mvc/mvc4 without much success. I suppose my main questions would be:
What are the main moving parts in a typical workflow? Let's say a request comes in. Who takes it, does stuff, and passes it on to whom? In Django, for example, a request goes through the URL mapper and middleware, then to a controller, which may dig through some models (via explicit function calls) to get some data, pass it into a template (also via an explicit function call) to be rendered, and pass the result back.
What kind of client-server coupling is there? For example, in many frameworks there is an explicit coupling of each HTML form with a server-side validator, with a model, and with a database table, such that client-side validation code is generated automatically. Another example is Quora's Livenode, which explicitly links client-side HTML components with their dependencies in the model, allowing changes in the database to propagate and automagically update client-side code.
I think there is no better answer to your first question than this ASP.NET MVC pipeline diagram:
http://www.simple-talk.com/content/file.ashx?file=6068
explained in more detail here:
http://www.simple-talk.com/dotnet/.net-framework/an-introduction-to-asp.net-mvc-extensibility/
To your second question: the answer is none. An ASP.NET MVC application doesn't even have to render HTML output; you can write your own view engine to produce any representation of the data, consumed not just by a browser but by any HTTP (REST) capable client. The only things you could consider coupling are the "conventions" (for model binding, for example), but they can be replaced and extended in any way you like.
What kind of client-server coupling is there?
As rouen said, none.
I am not familiar with Django, but unlike other MVC frameworks (including Rails) ASP.NET MVC is very skinny in that it only implements Views and Controllers of the traditional MVC pattern. You are pretty much on your own for the model part. That means there is no built-in support for database creation, ORM, et cetera.
ASP.NET MVC does implement a bunch of plumbing to route requests to the appropriate controllers, and even some binding of parameters (e.g. query-string parameters, form values) when invoking controller actions, but this binding is not to a full-blown model. The binding in this context usually targets either single values or "viewModels".
ASP.NET MVC also implements the plumbing to select the right view to render.
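To make that plumbing concrete, the MVC 4 project template wires it up roughly like this (the controller and its action are illustrative):

using System.Web.Mvc;
using System.Web.Routing;

// App_Start/RouteConfig.cs in the MVC 4 template: URL pattern -> controller/action.
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}

// A request to /Products/Details/5?showReviews=true reaches this action;
// the model binder fills the parameters from the route and query string.
public class ProductsController : Controller
{
    public ActionResult Details(int id, bool showReviews = false)
    {
        object model = LoadProduct(id); // the "M" part is whatever you choose yourself
        return View(model);             // by convention renders Views/Products/Details.cshtml
    }

    private object LoadProduct(int id)
    {
        return new { Id = id }; // placeholder for your own data access
    }
}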
I am currently working on several Flex projects that have gone, in a relatively short amount of time, from prototypes to rather large applications.
The time has come for some refactoring, so obviously the MVC principle came to mind.
For reasons out of my control, a framework (e.g. Robotlegs) is not an option.
Here comes the question: what general guidelines should I take into consideration when designing the architecture?
Also...say for example that I have the following: View, Ctrl, Model.
From View:
var ctrl:Ctrl = new Ctrl();
ctrl.performControllerMethod();
In Controller
public function performControllerMethod():void {
    // do some sort of processing and store the result in the model
    Model.instance.result = method_scope_result;
}
and based on the stored values update the view.
As far as storing values in the model that will later be used dynamically in the application (via time filtering or other operations), everything is clear. But in cases where data just needs to go in once (say a tree that gets populated once at loading time), is it really necessary to use the view -> controller -> model -> view update scheme, or can I just make the controller implement IEventDispatcher and dispatch custom events that hold the necessary data after the controller operation has finished?
Ex:
View:
var ctrl:Ctrl = new Ctrl();
ctrl.addEventListener(CustomEv.HAPPY_END, onHappyEnd);
ctrl.addEventListener(CustomEv.SAD_END, onSadEnd);
ctrl.performControllerMethod();
Controller
public function performControllerMethod():void {
    // dispatch a success or failure event depending on the processing result
    if (processOk) dispatchEvent(new CustomEv(CustomEv.HAPPY_END, theData));
    else dispatchEvent(new CustomEv(CustomEv.SAD_END));
}
When one of the event handlers kicks into action, do a cleanup of the event listeners (via event.currentTarget).
As I realize that this might not be so much a question as a discussion, I would love to get your opinions.
Thanks.
IMO, this whole question is framed in a way that misses the point of MVC, which is to avoid coupling between the model, view, and controller tiers. The View should know nothing of any other tier, because as soon as it starts having references to other parts of the architecture, you can't reuse it.
Static variables that are not constant are basically just asking for trouble (see http://misko.hevery.com/2009/07/31/how-to-think-about-oo/). Some people believe you can mitigate this by making sure that you only access these globals through a Controller, but as soon as all Views have a Controller, you have a situation where the static Model can be changed from literally anywhere.
If you want to use the principles of a framework without using a particular framework, check out http://www.developria.com/2010/04/combining-the-timeline-with-oo.html and http://www.developria.com/2010/05/pass-the-eventdispatcher-pleas.html. But keep in mind that established frameworks have solved most of the issues you will encounter; you're probably better off just using one.