I'm using Prism Forms 7.x in a Xamarin Forms app. Up to now, I have been using the INavigatedAware interface in view models to check whether a navigation to or from the respective view model has happened. Now I've seen that there is INavigatingAware, which only provides the OnNavigatingTo method (so the navigation is not yet finished).
My questions regarding INavigatingAware.OnNavigatingTo:
- Can I use INavigatingAware where I'm not interested in the OnNavigatedFrom call?
- Is it better in terms of performance to load data within OnNavigatingTo (before the BindingContext is set, so that e.g. the data bindings don't need to be updated twice)?
It would be nice if you could share your experiences and best practices regarding these two interfaces.
INavigatingAware.OnNavigatingTo was first introduced in Prism to help developers perform initialization logic, similar to ViewWillAppear on iOS.
To help visualize this, the sequence of events within the NavigationService looks something like this:
1. Create the Page
2. Set the ViewModelLocator.Autowire property if it is null
3. Apply any behaviors from the PageBehaviorFactory
4. Call IConfirmNavigation.CanNavigate (and its async counterpart) on the Page/ViewModel we're navigating away from
5. Call INavigatingAware.OnNavigatingTo
6. Push the page onto the NavigationStack
7. Call INavigatedAware.OnNavigated{From|To}
BREAKING CHANGE
Now, all of that said, we have had a tremendous amount of feedback on INavigatingAware (the essence of this very question). As a result of the overwhelming feedback from the Prism community, INavigatingAware has been hard-obsoleted in Prism 7.2. This means that it was removed from INavigationAware and will produce a compile-time error if you implement it directly. For those cases where you got it for free from INavigationAware, it simply will not be called. Moving forward, we have introduced a series of interfaces to make this easier and more self-documenting as to the intent.
New Interfaces & API
- IInitialize.Initialize
- IInitializeAsync.InitializeAsync
The new IInitialize interface is the direct replacement for INavigatingAware. We have long gotten feedback that people would like the ability to perform async tasks during initialization. The issue is that this can cause a very noticeable delay in navigation, similar to IConfirmNavigationAsync. If you use either of these async interfaces, be sure to include some sort of busy/loading overlay on your screen.
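To illustrate, here is a minimal sketch of a ViewModel using the new interfaces; the ViewModel, service, and parameter names are invented for the example, while the interfaces and their signatures are the ones Prism 7.2 ships:
using System.Threading.Tasks;
using Prism.Navigation;

public class OrderDetailPageViewModel : IInitialize, IInitializeAsync
{
    private readonly IOrderService _orderService; // hypothetical service

    public OrderDetailPageViewModel(IOrderService orderService)
    {
        _orderService = orderService;
    }

    public int OrderId { get; private set; }
    public Order Order { get; private set; } // hypothetical model type

    // Direct replacement for INavigatingAware.OnNavigatingTo: runs before
    // the page is pushed onto the navigation stack.
    public void Initialize(INavigationParameters parameters)
    {
        OrderId = parameters.GetValue<int>("orderId");
    }

    // Async initialization: this delays the navigation itself, so show a
    // busy/loading overlay while it runs.
    public async Task InitializeAsync(INavigationParameters parameters)
    {
        Order = await _orderService.GetOrderAsync(OrderId);
    }
}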
Related
I'm about to start a new project and I'd like to use the Single Page Application technique. Since I'll be using ASP.NET, I think the easiest way will be to use Angular, which I'm new to.
Anyway, what scares me most about Angular (or any other JS/TS technology) is that, since I don't have much time, I can't afford to rewrite all the models/entities in another language. The coding and maintenance cost of that is too high for me.
tl;dr
So my question is, is there a way to have Angular use the original model/entity names so I can use them in the page without the need to rewrite any unnecessary code?
Will the .NET attributes still take effect?
I guess your concern is that your business-object world (entity model) needs to be reflected in your client/Angular app as models (JavaScript objects)? The need for these also comes from the typing errors you get in Angular 2.
Creating and maintaining a transparent model world spanning the server and client parts is way too much effort for real-world applications, although it would be nice.
I decided to receive the model as the result of a remote call via AJAX/WebAPI and work with these "models" in my client applications. The result then reflects the business model (entities) you have probably already defined.
this.dataService.getRecords('MT_MyEntity')
    .subscribe((data: any[]) => {
        var response: any = data;               // typed as any to avoid typing errors
        var resprecords: any = response.items;
        // Here you get the entities; deal with the business objects fetched
        // from the remote system (show them in forms, ...).
    },
    error => {
        // your error handling
    });
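For context, the server side of such a call can be a plain Web API action that serializes the entities directly; the following is only a hedged sketch (the controller, DbContext, and entity names are made up):
using System.Linq;
using System.Web.Http;

// Hypothetical ASP.NET Web API controller returning entities as JSON.
public class RecordsController : ApiController
{
    private readonly MyDbContext _db = new MyDbContext(); // hypothetical EF context

    // GET api/records?entityName=MT_MyEntity
    public IHttpActionResult GetRecords(string entityName)
    {
        // The JSON serializer emits the entity's property names, so the
        // Angular client can use the same names as the server-side model
        // (watch out for serializer settings that camelCase them).
        var items = _db.MyEntities.ToList();
        return Ok(new { items = items });
    }
}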
In your application you can then use the entity and attribute names you have defined in your server-side model (but take care of upper/lower-case modifications).
For me this is a pragmatic way to deal with that and it works very well.
For any decent-sized application, the benefits of creating a client-side model far outweigh the effort required to create and maintain it.
This effect is more pronounced with TypeScript, as it allows compile-time checking of the contracts. As we move more and more code to the client side and use frameworks like Angular, having a clearly defined model helps us understand what is happening. We derive the same benefits that we get when type checking is available on the server.
Having a separate client-side model also allows us to adapt that model to client-side UI needs (albeit sometimes we create a viewmodel to satisfy such requirements).
The approach of generating these client-side contracts, as highlighted by #Ivan, can help reduce the overall effort.
In one of my views, I have a ViewModel which I populate from two tables, and then bind a List<ViewModel> to an editable GridView (ASP.NET Web Forms).
Now I need to send that edited List<ViewModel> back to the Services layer to update it in the database.
My question is: is it okay to send the ViewModel back to Services, or should it stay in the Presentation layer? If not, should I rather use a DTO? Many thanks.
Nice question!
After several (hard) debates with my teammates, plus my experience with MVC applications, I would not recommend passing viewmodels to your service/domain layer.
ViewModel belongs to presentation, no matter what.
Because a viewModel can be a combination of different models (e.g. one viewModel built from 10 models), your service layer should only work with your domain entities.
Otherwise, your service layer will end up unusable, because it is constrained by viewModels that are specific to one view.
Nice tools like AutoMapper (https://github.com/AutoMapper/AutoMapper) were made to do the mapping job.
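As a hedged sketch of that mapping step (the ViewModel, entity, and service names are invented; the AutoMapper calls are the library's standard API):
using AutoMapper;

// Configure the mapping once, at application startup.
var config = new MapperConfiguration(cfg =>
{
    // Presentation-layer ViewModel -> domain entity.
    cfg.CreateMap<CustomerViewModel, Customer>();
});
var mapper = config.CreateMapper();

// In the presentation layer, translate before calling the service,
// so the service layer only ever sees domain entities.
Customer entity = mapper.Map<Customer>(editedViewModel);
customerService.Update(entity);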
I would not do it. My rule is: supply service methods with everything they need to do their job and nothing more.
Why?
Because it reduces coupling. More often than not, service methods are called from several sources (consumers). It is much easier for a consumer to fulfil a simple method signature than to build a relatively complex object like a view model that it otherwise may have nothing to do with. It may even need a reference to an assembly it wouldn't otherwise need.
It greatly reduces maintenance effort. I think an average developer spends more than 50% of their time inspecting and tracking existing code (maybe much more). Now, everybody knows that looking for something that is not there takes disproportionately long: you must have been everywhere to be sure. If a method receives arguments (or an object with properties) that are not used directly or further down the call stack, you or others will walk this long road time and again.
So if there is anything in the view model that does not play a part in the service method, don't use it to call the method.
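To make the rule concrete, a small sketch (all names invented) contrasting the two signatures:
// Coupled: forces every consumer to build a presentation type
// that the service mostly ignores.
// void UpdatePrice(ProductEditViewModel viewModel);

// Decoupled: supplies exactly what the method needs, nothing more.
public interface IProductService
{
    void UpdatePrice(int productId, decimal newPrice);
}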
Yes. I am pretty sure it is ok.
Try MS Entity Framework - it will help you a lot.
So, I need some input on refactoring an ASP.NET (C#) application that is basically a framework for creating dynamic forms (any forms). From a high-level point of view, there is a table that holds the forms, and a table that holds all the form fields, with a one-to-many relationship between the two. There is also a validation table, where each field can have multiple types of validation - a one-to-many relationship from the form fields table to the validation table.
So the issue is that this application has been sold as the be-all-end-all customizable solution for all the clients. The idea is that whatever form they want, we can build it just using DB configuration. The thing is, that is not always possible, because there are complex relationships between the fields and between the forms themselves. Also, there is only one codebase, and it serves multiple clients - all of whom host it themselves. There is very specific logic for each of the clients, and it is ALL in the same codebase, with no real separation. Sometimes it was too difficult to make things generic, so there are instances of hard-coded logic (as in: if formID = XXX then do _). You can also have nested forms, as in one set of fields on its own within each form.
So usually, when one client requests a change, we make the change and deploy it to that client - but then another client requests a different change, and we make that change and deploy it for THAT client, and the change from the earlier client breaks it. It's a headache trying to debug, because EVERYTHING is dynamic. There is no way we can roll back the earlier change, because then the other client would be screwed.
It's not done in a real 3-tier architecture - it's a web site with references to a DB class and a class library. There is business logic in the web site itself, in the class library, and in the database stored procs (validation is done in the stored procs).
I've been put in charge of re-organizing the whole thing, and these are my thoughts/questions:
I think this is a bad model in general, because one of the things I heard one of the developers say is that any time any client makes a change, we should deploy to everybody - but that is not realistic. If we have, say, 20 clients, there would need to be regression testing on EVERYTHING, since we don't know the impact...
There are about 100 forms in total, and there is some similarity between them (not much). But I also think the idea that a dynamic engine can solve ALL form requests was not realistic. Clients come up with the weirdest requests. For example, they have this engine doing a regular data-entry form AND a search form.
There is a lot of state preserved between pages, and it is all done using session variables, which is OK, except that it is not really tracked, so sessions from the same user keep getting overwritten; I think the sessions should go.
Should I really just rewrite the whole thing? This app is about 3 years old, there has been lots of testing, and serious business logic has been implemented, so I hate to throw all that away (Joel's advice). But it's really a mess of spaghetti code, everything takes forever to do, and things break all the time because of minor changes.
I've been reading Martin Fowler's "Refactoring" and Michael Feathers' "Working Effectively with Legacy Code" - they are good, but I feel they were written for applications that were 'slightly' better architected, where there is still a 3-tier architecture and 'some' resemblance of logic...
Thoughts/input anyone?
Oh, and "Help!"
My current project sounds almost exactly like the product you're describing. Fortunately, I learned most of my hardest lessons on a former product, so I was able to start my current project with a clean slate. You should probably read through my answer to this question, which describes my experiences and the lessons I learned.
The main thing to focus on is the idea that you are building a product. If you can't find a way to implement a particular feature using your current product feature set, you need to spend some additional time thinking about how you could turn this custom one-off feature into a configurable feature that can benefit all (or at least many) of your clients.
So:
If you're referring to the model of being able to create a fully customizable form that makes client-specific code almost unnecessary, that model is perfectly valid and I have a maintainable working product with real, paying clients that can prove it. Regression testing is performed on specific features and configuration combinations, rather than a specific client implementation. The key pieces that make this possible are:
- An administrative interface that is effective at disallowing problematic combinations of configuration options.
- A rules engine that allows certain actions in the system to invoke customizable triggers and cause other actions to happen.
- An integration framework that allows data to be pulled from a variety of sources and pushed to a variety of sources in a configurable manner.
- The option to inject custom code as a plugin when absolutely necessary.
Yes, clients come up with weird requests. It's usually worthwhile to suggest alternative solutions that will still solve the client's problem while allowing your product to remain robust and configurable for other clients. Sometimes you just have to push back. Other times you'll have to do what they say, but use wise architectural practices to minimize the impact this could have on other client code.
Minimize use of the session to track state. Each page should have enough information on it to track the current page's state. Information that needs to persist even if the user clicks "Back" and starts doing something else should be stored in a database. I have found it useful, however, to keep a sort of breadcrumb tree on the session, to track how users got to a specific place and where to take them back to when they finish. But the ID of the node they're actually on currently needs to be persisted on a page-by-page basis, and sent back with each request, so weird things don't happen when the user is browsing to different pages in different tabs.
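A hedged sketch of that breadcrumb idea (all names invented) - the tree lives on the session, but each page round-trips the ID of its own node:
using System;
using System.Collections.Generic;

// One node per place the user has visited; the whole tree is kept on
// the session, e.g. in a Dictionary<Guid, BreadcrumbNode>.
[Serializable]
public class BreadcrumbNode
{
    public Guid Id = Guid.NewGuid();
    public string PageUrl;
    public BreadcrumbNode Parent;
}

// Each page posts back the ID of its own node (e.g. in a hidden field),
// so "where am I?" is per-page rather than per-session:
//   Guid nodeId = new Guid(Request.Form["breadcrumbNodeId"]);
//   BreadcrumbNode node = breadcrumbs[nodeId];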
Use incremental refactoring. You may end up re-writing the whole thing twice by the time you're done, or you may never really "finish" the refactoring. But in the meantime, everything will still work, and you'll have new features every so often. As a rule, rewriting the whole thing will take you several times as long as you think it will, so don't try to take the whole thing in a single bite.
I have a number of similar apps for building dynamic forms that I support.
There's a whole lot of things you could/could not do & you're right to think hard before throwing away 3 years of testing/development.
My input for you to consider is to implement a plug-in architecture on top of what you've got. Any custom code for a form goes in the plug-in, and the name of this plug-in is stored with the form. When you generate a form, the correct plug-in is called to enhance the base functionality. That way you get to move all the custom code out of the existing library. It should also mean fewer breaking changes, since each plug-in only affects the form it's attached to.
From that point it'll be easy to refactor the core engine as it's common functionality across all clients & forms.
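A minimal sketch of what such a plug-in contract could look like (the interface, Form type, and loading mechanism are all invented for illustration):
using System;

// Contract every form plug-in implements; the plug-in's type name is
// stored in the database alongside the form definition.
public interface IFormPlugin
{
    // Called after the base engine has generated the form, to apply
    // the client-specific custom behavior.
    void Enhance(Form form);
}

// When generating a form, resolve and invoke its plug-in, if any.
public static class FormPluginLoader
{
    public static IFormPlugin Load(string pluginTypeName)
    {
        Type type = Type.GetType(pluginTypeName);
        return (IFormPlugin)Activator.CreateInstance(type);
    }
}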
Since your application seems to have become a big ball of mud, a complete (or almost complete) rewrite might make sense.
You should also take into account newer technologies like document-oriented databases (CouchDB, MongoDB).
Most of the form definitions could probably fit pretty well in a document-oriented database. For example, to define a customer form, you could use a document that looks like this:
{Type:"FormDefinition",
EntityType: "Customer",
Fields: [
{FieldName:"CustomerName",
FieldType:"String",
Validations:[
{ValidationType:"Required"},
{ValidationType:"StringLength", Minimum:15, Maximum:50},
]},
...
{FieldName:"CustomerType",
FieldType:"Dropdown",
PossibleValues: ["Standard", "Valued", "Gold"],
DefaultValue: ["Standard"]
Validations:[
{ValidationType:"Required"},
{
ValidationType:"Custom",
ValidationClass:"MySystem.CustomerName.CustomValidations.CustomerStatus"
}
]},
...
]
};
With this kind of document to define your forms, you could easily add forms and validations which are customer specific.
You could easily add subforms using a FieldType of SubForm or whatever.
You could define FieldTypes for all common types of fields like e-mail, phone numbers, addresses, etc. The custom validation class referenced in the document above could then look like this:
namespace MySystem.CustomerName.CustomValidations {
    public class CustomerStatus : IValidator {
        private FormContext form;
        private List<ValidationError> validationErrors;

        public CustomerStatus(FormContext fc) {
            this.validationErrors = new List<ValidationError>();
            this.form = fc;
        }

        public List<ValidationError> Validate() {
            if (this.form.Fields["CustomerType"] == "Gold" && int.Parse(this.form.Fields["OrderCount"]) < 10) {
                this.validationErrors.Add(new ValidationError("A gold customer must have at least 10 orders"));
            }
            if (this.form.Fields["CustomerType"] == "Valued" && int.Parse(this.form.Fields["OrderCount"]) < 5) {
                this.validationErrors.Add(new ValidationError("A valued customer must have at least 5 orders"));
            }
            return this.validationErrors;
        }
    }
}
A record conforming to that definition could look like this:
{Type:"Record",
EntityType: "Customer",
Fields: [
{FieldName:"CustomerName", Value:"ABC Corp.",
{FieldName:"CustomerType", Value:"Gold",
...
]
};
Sure, this solution is a lot of work, but once realized it would make it really easy to create, update, and customize forms.
This is a common but (IMO) somewhat naive design approach: "Instead of solving the customer's problem, let's build a tool to let them solve their own problems!" But the reality is that customers generally want YOU to solve their ACTUAL problems. So build things that solve their problems.
If you can architect it in a way that allows you to reuse some parts for different customers, fine. But that is generally what the frameworks have done for you already - work out the common features that applications need and make them available in neat packages.
I am currently working on several Flex projects that have gone, in a relatively short amount of time, from prototype to fairly large applications.
The time has come for some refactoring, so obviously the MVC principle came to mind.
For reasons out of my control, a framework (e.g. Robotlegs) is not an option.
Here comes the question: what general guidelines should I take into consideration when designing the architecture?
Also, say for example that I have the following: View, Ctrl, Model.
From View:
var ctrl:Ctrl = new Ctrl();
ctrl.performControllerMethod();
In Controller
public function performControllerMethod():void {
    // do some sort of processing and store the result in the model.
    Model.instance.result = method_scope_result;
}
and then, based on the stored values, update the view.
Storing values in the model that will later be used dynamically in the application (via time filtering or other operations) is clear to me. But in cases where data just needs to go in once (say, a tree that gets populated at load time), is it really necessary to use the view -> controller -> model -> view update scheme? Or can I just make the controller implement IEventDispatcher and dispatch custom events, holding the necessary data, after the controller operation has finished?
Ex:
View:
var ctrl:Ctrl = new Ctrl();
ctrl.addEventListener(CustomEv.HAPPY_END, onHappyEnd);
ctrl.addEventListener(CustomEv.SAD_END, onSadEnd);
ctrl.performControllerMethod();
Controller
public function performControllerMethod():void {
    (processOk) ? dispatchEvent(new CustomEv(CustomEv.HAPPY_END, theData))
                : dispatchEvent(new CustomEv(CustomEv.SAD_END));
}
When one of the event handlers kicks into action, do a cleanup of the event listeners (via event.currentTarget).
As I realize that this might not be a question, but rather a discussion, I would love to get your opinions.
Thanks.
IMO, this whole question is framed in a way that misses the point of MVC, which is to avoid coupling between the model, view, and controller tiers. The View should know nothing of any other tier, because as soon as it starts having references to other parts of the architecture, you can't reuse it.
Static variables that are not constant are basically just asking for trouble (see http://misko.hevery.com/2009/07/31/how-to-think-about-oo/). Some people believe you can mitigate this by making sure you only access these globals through a Controller, but as soon as every View has a Controller, you have a situation where the static Model can be changed from literally anywhere.
If you want to use the principles of a framework without using a particular framework, check out http://www.developria.com/2010/04/combining-the-timeline-with-oo.html and http://www.developria.com/2010/05/pass-the-eventdispatcher-pleas.html. But keep in mind that established frameworks have solved most of the issues you will encounter; you're probably better off just using one.
I am working on a design spec for a new application that will be heavily workflow driven.
Before I reinvent the wheel: is there already a decent lightweight workflow engine that plugs into ASP.NET?
Basically, I'm looking for something that handles moving through a defined set of workflow pages while handling state management automatically.
If this isn't around already, I'll definitely try to abstract the engine from my app and put it on CodePlex, as it would be really handy.
Any suggestions?
Note: .NET 2.0, so no WWF, though I think WWF is overkill for my needs.
EDIT: Seems like there is a legitimate need for this, and there isn't a product out there...So I might build this.
Here is what I'm picturing (a rough code sketch follows the list):
- A custom Page class called WebFlowPage.
- All WebFlowPages are registered in a workflow mapper.
- Each WebFlowPage has some form of state object.
- An HttpHandler picks the appropriate WebFlowPage based upon the workflow, and populates it from the state object.
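A rough, hypothetical sketch of those pieces (.NET 2.0-style C#; every name here is invented):
using System;
using System.Collections.Generic;

// Base class every page in a flow derives from.
public abstract class WebFlowPage : System.Web.UI.Page
{
    // State carried between the steps of the flow; the HttpHandler
    // loads it before the page runs and saves it afterwards.
    public abstract void LoadFlowState(object state);
    public abstract object SaveFlowState();
}

// Maps a workflow name to its ordered sequence of page types.
public static class WorkflowMapper
{
    private static readonly Dictionary<string, List<Type>> Flows =
        new Dictionary<string, List<Type>>();

    public static void Register(string workflow, List<Type> pages)
    {
        Flows[workflow] = pages;
    }

    public static Type GetStep(string workflow, int stepIndex)
    {
        return Flows[workflow][stepIndex];
    }
}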
Is the workflow dynamic, or static?
If the workflows are simple, you could roll your own workflow engine.
In certain situations, it can be fairly simple, and just a couple of data tables to handle the rules, processing and state.
A lot of workflow engines are built for large-scale processing (credit card applications, for example). For small-scale needs, you should at least consider rolling your own, which would eliminate the overhead of, and the dependency on, an engine.
Not sure exactly what you wish to do here, but Ra-Ajax can easily keep state, at least if you want your solution Ajaxified...
For reference purposes you might want to check out the Ajax Calendar sample or even the (banalistically implemented) Ajax Wizard sample. It surely beats the hell out of doing it with JavaScript...
And every time you "do something" you're in "server-land" which means you can store temporaries all the time as you wish...
The project is LGPL
(PS: Yes, I do work with it.)
Building a custom workflow engine is not trivial, although it may seem simple at first. We've tried that. It depends a lot on the complexity of the logic you need it to cover.
Given the current state of the Windows Workflow Foundation and the lack of another framework that abstracts the workflow concepts, I would choose WF if you need complex logic, asynchronous handling or branches in your workflows.
Tracking your state through the workflow can be accomplished by carrying some kind of XML payload or by storing the state in a database.
If your workflow is actually a sequential set of forms that need to be filled in by the user, tracking the steps and guiding the user to the next step can be accomplished with some simple custom solution.
You could take a look at the InRule engine too.
Also, there is nxBRE.
These two are mostly used for business rules.
InRule is proprietary, whereas nxBRE supports RuleML (the de facto standard).
You might need to make your own implementation for the pages, and use the rule engine as the "structure".
At the moment, I know that SharePoint 2007 supports page workflows (using WF), but this would imply using .NET Framework 3.0 and deploying SharePoint.
My suggestion would be to use whatever you find more light and easier to use.
I think the term "workflow" is very open to interpretation. I have been working lately with a type of workflow that is very different from what you seem to be describing. Mine is a state machine based workflow where the state of a particular record determines what actions a user can take to move the record to the next step in the business process. So "workflow" in this instance means how the record flows from one state to another until it is finally completed.
Your usage of workflow seems to have more to do with moving a user from one page to another in a linear multi-step process, which is a completely different use case (correct me if I'm wrong). So before coming up with a general-purpose "workflow" engine that anyone could use, I would recommend defining a bit better exactly what types of situations this system would handle.
I've been using this for a few months: http://objectflow.codeplex.com. It's not ASP-specific, but it may fit your needs.
While browsing the web for workflow & BPM resources, I found the following project: NetBPM. Unfortunately, the project seems to have stopped.
I don't think there is a workflow engine that will automatically handle state for you, but if you are moving through a set of pages in a process, such as checkout on an e-commerce site, perhaps the ASP.NET Wizard control could help you.
There are a few workflow options. "Aspose" and "Skelta" are the offerings I'm evaluating.
Fábio
You can use Workflow Engine - just read the documentation and run the demo.
All of the features you need for a dynamic workflow engine have been added there.