Using Rule Flow in InRule for Workflow

I see that Rule Flow supports actions, so it may be possible to build some types of workflow on top of it. In my situation, I have a case management application with tasks for different roles, all working on a "document" that flows through different states; depending on the state, a different role will see it in their queue to work on.

I'm not sure what your question is, but InRule comes with direct support for Windows Workflow Foundation, so executing any InRule RuleApplication, including those with RuleFlow definitions, is certainly possible.
If you'd like assistance setting up this integration, I would suggest utilizing the support knowledge base and forums at http://support.inrule.com
Full disclosure: I am an InRule Technology employee.

For case management scenarios, you can use decisions specifically to model a process. Create a custom table or flags in your cases that mark the transition points in your process (steps). As you transition between steps, call a decision that determines whether the data is in a good enough state to make the transition. If it is, set the flag for the new state. Some folks allow for multiple states at the same time. InRule is a stateless platform; however, when used with CRM it provides 95% of the process logic and relies on CRM to do the persistence. I have written about this pattern in a white paper:
https://info.inrule.com/rs/inruletechnology/images/White_Paper_InRule_Salesforce_Integration.pdf
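
As a rough sketch of that pattern in code (the names here are hypothetical stand-ins, not the actual InRule SDK API; see the product documentation for the real calls):

using System.Collections.Generic;

// Hypothetical "decision as transition guard" sketch. IDecisionService
// stands in for whatever invokes your rule engine; it is NOT the InRule SDK.
public interface IDecisionService
{
    // Runs a decision against the case data and reports whether
    // the requested state transition is allowed.
    bool CanTransition(CaseDocument document, string targetState);
}

public class CaseWorkflow
{
    private readonly IDecisionService _decisions;

    public CaseWorkflow(IDecisionService decisions) => _decisions = decisions;

    public bool TryTransition(CaseDocument document, string targetState)
    {
        // Ask the (stateless) rules engine whether the data supports the move.
        if (!_decisions.CanTransition(document, targetState))
            return false;

        // The engine is stateless: persisting the new state flag is the
        // application's (or CRM's) job.
        document.States.Add(targetState); // some shops allow multiple concurrent states
        return true;
    }
}

public class CaseDocument
{
    public ISet<string> States { get; } = new HashSet<string>();
}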

Related

Best Practice for handling related entities in Application Service / (Microservice)

I have gone through a few posts like THIS and THIS, but I couldn't resolve a design confusion that's in my mind. What would be the best practice for adding CRUD for related entities when working with microservices? (My question is generally about microservices, but to be specific, I'm currently using aspboilerplate.)
For instance, I have an entity Product having around seven to ten properties in it. And then we may have another entity ProductContract. Each product can have multiple contracts added/updated.
Question: is it a good idea to create a separate Application Service for the ProductContract entity, since it is going to have its own input/output DTOs, logic, etc.?
OR
Is it better to create a ProductContractManager and keep the logic there, then inject this manager into the Product Application Service? This way, the Product Application Service would have methods like 'AddProductContract', 'UpdateProductContract', etc.
The main driver of whether ProductContract should be owned by a different service is the consistency requirements, because achieving stronger levels of consistency between microservices is difficult (especially without tying the services together at the DB level; if you go down that road, the decision to have separate microservices should probably also be revisited). So the main question to ask is: when a Product is updated or deleted, can there be a window of time during which that update/delete isn't reflected in ProductContract (and vice versa)?
If the answer to that is that it can't, then you'll save yourself a lot of trouble by just putting responsibility for ProductContract in the same service that has responsibility for Product. If eventual consistency is allowed, then a separate service might be called for: the consideration there isn't really technical but more related to human organization (who will be developing and maintaining the functionality) and expectations around whether ProductContract and Product will evolve together (functionality changes in one being likely to also lead to functionality changes in the other).
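
For the strong-consistency case, here is a minimal sketch of the single-service option (plain C# with illustrative names; aspboilerplate's actual base classes and repository interfaces differ):

using System;

// Stand-in persistence abstraction; your framework's repositories would differ.
public interface IRepository<T>
{
    T Get(int id);
    void Insert(T entity);
    void Delete(int id);
    void DeleteAll(Func<T, bool> predicate);
}

public class Product { public int Id; }
public class ProductContract { public int ProductId; public string Terms; }
public class CreateContractDto { public string Terms; }

// One application service owns both Product and ProductContract, so a delete
// can remove the product and its contracts together, with no consistency window.
public class ProductAppService
{
    private readonly IRepository<Product> _products;
    private readonly IRepository<ProductContract> _contracts;

    public ProductAppService(IRepository<Product> products,
                             IRepository<ProductContract> contracts)
    {
        _products = products;
        _contracts = contracts;
    }

    public void AddProductContract(int productId, CreateContractDto input)
    {
        var product = _products.Get(productId); // fails fast if the product is gone
        _contracts.Insert(new ProductContract { ProductId = product.Id, Terms = input.Terms });
    }

    public void DeleteProduct(int productId)
    {
        _contracts.DeleteAll(c => c.ProductId == productId); // no orphaned-contract window
        _products.Delete(productId);
    }
}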

Implementing process workflow in PureMVC

I'm looking for suggestions regarding implementing process flow / work flow management in a PureMVC based application.
Our Flex application includes a number of processes such as account creation, payment processing, etc.
Within our team, there is some discussion of how rigidly we should adhere to the PureMVC model.
Within the PureMVC model, it seems reasonable that the current state in the process could be managed in a Proxy.
Commands are clearly responsible for processing the actions required of each node and for node transitions.
Mediators for managing the UI.
However, I think that there is an important bit still missing here: a ProcessController.
The approaches we've reviewed all seem to either violate the PureMVC model (even just slightly) or result in unreadable code.
A proxy would maintain the state of the process. As such, it seems to be an appropriate way to implement the controller. However, this is putting a lot of business logic into the proxy.
The Mediator space makes more sense, but the controller in that space would not necessarily directly interact with any particular UI element but would instead coordinate/delegate to dedicated Mediators.
Yet another model would have us put process transition information into Commands. While this seems to be the best place for that work (given the role of commands relative to proxies and mediators), this approach looks likely to produce some particularly heinous code, with process transition logic distributed among scores of commands.
So how have others handled this problem?
Thanks,
Curtis
This is exactly the problem that PureMVC StateMachine Utility (and Finite State Machines in general) are meant to solve.
In a simple XML format, you can define states, valid transitions to other states, and the actions that trigger those transitions.
It is all notification-based, so you send StateMachine.ACTION notifications that cause the StateMachine to execute any entering/exiting guard logic that may be necessary (e.g., only allow exiting the FORM_ENTRY state if all the data is valid, or only allow entry into the FORM_PROCESSING state if the user has admin rights). If a state change occurs, notifications are sent that can be used to organize the view or execute logic upon entering the new state.
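
From memory, the FSM definition looks roughly like this (attribute names may vary between versions of the utility, so treat it as a sketch and check the utility's documentation):

<fsm initial="FORM_ENTRY">
    <!-- Exiting FORM_ENTRY fires its exiting guard (e.g., validate the data) -->
    <state name="FORM_ENTRY" exiting="exitingFormEntry">
        <transition action="SUBMIT" target="FORM_PROCESSING"/>
    </state>
    <!-- Entering FORM_PROCESSING fires its entering guard (e.g., check admin rights) -->
    <state name="FORM_PROCESSING" entering="enteringFormProcessing" changed="formProcessingChanged">
        <transition action="COMPLETE" target="FORM_CONFIRMATION"/>
    </state>
    <state name="FORM_CONFIRMATION" changed="formConfirmationChanged"/>
</fsm>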
Here is a StateMachine Overview presentation that will give you a better idea:
http://puremvc.tv/#P003/
I think your idea of a 'ProcessController' is probably the better way of doing it. Personally, I'm not a fan of PureMVC and don't use it, because it doesn't allow enough flexibility or extension points to help with problems like this.
Frankly, it's hard to advise on your issue because I don't know exactly what you're trying to accomplish. I do agree that the concern needs to be handled by one object. If you can, try to create a model that can store the data for the process and have another class just manage it throughout. Not sure if that makes sense, but then again, your problem isn't very clear either.
As an added extra, I would look into dependency injection. I'm trying to remember whether PureMVC supports it (I don't think it does), but with DI this would have been a fairly simple problem to solve. Frameworks like Parsley and Robotlegs are really good at that.
In PureMVC, the StateMachine utility is probably the best choice for a process controller. By the way, according to the Implementation Idioms & Best Practices document for PureMVC, it's perfectly fine to have mediators that don't manage a visible component.

Data Binding in 3 layered architecture?

Does data binding fit in a 3-layered architecture? Is dropping a GridView on a web form and binding it to a LinqDataSource or SqlDataSource considered bad? The way I see it, that's the Presentation Layer talking to the Data Access Layer. I once heard a "professional developer" say never ever do this, so what's the alternative if you shouldn't?
The way you are doing it is OK if it is a small project, but if you want your app to have the flexibility to support Windows and web clients in the future, then you must use layers.
Please follow this link http://www.dotnetspider.com/resources/1566-n-Tier-Architecture-Asp.aspx
You should have a middle tier between the Presentation and Data Access layers. The middle tier is pulled out from the presentation tier and, as its own layer, controls the application's functionality by performing detailed processing.
The main tasks of the business layer are business validation and business workflow.
When you build your business logic components into an SDK, you effectively disconnect them from your web application and any input validation that it performs. Therefore, your business logic components are the last line of defense to make sure that only valid values make it into your database.
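
As a toy illustration of that last line of defense (all names here are made up):

using System;

// Business-layer validation runs no matter which UI called it, so bypassed
// or missing UI validation cannot push bad values into the database.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository) => _repository = repository;

    public void PlaceOrder(Order order)
    {
        // Business validation: the last line of defense before persistence.
        if (order.Quantity <= 0)
            throw new ArgumentException("Quantity must be positive.", nameof(order));
        if (order.Total < 0m)
            throw new ArgumentException("Total cannot be negative.", nameof(order));

        _repository.Save(order); // only valid values make it into the database
    }
}

public interface IOrderRepository { void Save(Order order); }
public class Order { public int Quantity; public decimal Total; }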
Databinding is, of course, necessary to effectively display data.
Tooling is great and can boost productivity. It is equally important to understand what the tooling is generating, even at a basic level, in order to be able to effectively utilize the generated code.
The reaction you describe seems a bit extreme. If a wizard can generate some code that works for ya, then use it. If you don't understand the generated code then that is the next priority; learn about what it is doing and why. In the meantime, you have a page that people can put eyes on regardless of how it got there.
I am a bit pragmatic when it comes to tools. You do what you have to do. But, if after [insert appropriate internship length] you are still using code gen and cannot customize or fix it then you (as in the royal you, not the you you) are being lazy or stupid or both. ;-)
OT: (almost) Never say never unless you want to lessen the impact of what you are trying to communicate.
my 2 pesos.
When you're doing a small project or a prototype, go with the LinqDataSource or SqlDataSource. However, the downsides of those data sources are serious enough that you should think hard about whether they are appropriate. If you're doing a multi-layered or multi-tier architecture, they simply don't fit. But even if your architecture isn't that strict, you should ask yourself how big this application is going to be and how likely it is that you will make changes to the system in the future. How much time is it going to take you when you want to make a change to the database?
I've seen projects where the developers used those data sources because those were the constructs used in those nice ASP.NET videos. However, when the projects grew from prototypes into big production applications (yes, I've seen it happen; the prototype seemed good enough), the lack of compile-time support (your queries are defined in markup!) made it very hard to make any change to the system.
When you need to make a change to the system, that will be the moment you see that the cost of the change is an order of magnitude bigger than the time you saved by flattening your architecture.
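
In code, the alternative is binding the grid through a business-layer method from the code-behind rather than declaring the query in markup. A rough sketch (all names illustrative):

using System;
using System.Collections.Generic;
using System.Web.UI.WebControls;

// Code-behind sketch (ASP.NET Web Forms): bind the grid through the business
// layer instead of a SqlDataSource declared in markup.
public partial class ProductsPage : System.Web.UI.Page
{
    protected GridView ProductsGrid; // normally declared in the .aspx markup

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            var service = new ProductService(); // business layer
            ProductsGrid.DataSource = service.GetActiveProducts();
            ProductsGrid.DataBind();
            // The query now lives in compiled code: a schema change breaks
            // the build, not the page at runtime.
        }
    }
}

// Stand-in business layer; in a real app this calls your DAL.
public class ProductService
{
    public List<Product> GetActiveProducts() => new List<Product>();
}

public class Product { public int Id; public string Name; }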

How to balance DRY principle with minimizing dependencies?

I'm having a problem with the DRY principle (Don't Repeat Yourself) and minimizing dependencies that revolves around Rete rules engines.
Rules engines in large IT organizations tend to be Enterprise (note the capital "E" - that's serious business). All rules must be expressed once, nice and DRY, and centralized in an expensive rules engine. A group maintains the rules engine and is the keeper of the rule sets.
When that IT organization is part of an American insurance company, there tend to be lots of rules. There are rules that apply to all states and products, but each state tends to evolve its own laws for different products, so the rules need to reflect these quirks. The categories are many: actuarial, underwriting, even for ordering credit and motor vehicle reports from 3rd party bureaus.
The problem that I have from a design standpoint is that centralizing rules and processing is certainly nice and DRY, but there are costs:
Additional network hops to access the centrally located rules service and return results;
Additional complexity if the rules engine is exposed as a SOAP web service - consumers have to package up SOAP requests and OXM the response back to their own domain;
Additional interfaces between the enterprise group that maintains the rules engine, the business that sets and maintains the rules, and the developers that consume them;
Additional complexity - sometimes a data-driven solution might be enough;
Additional dependencies - components that don't have control of their own rules have to worry about external dependencies on the rules engine for testing, deployment, releases, etc.
These problems crop up with lots of other Enterprise technologies (e.g., B2B gateways, ESBs, etc.)
The same Enterprise groups also tout SOA as a foundational principle. But my understanding of proper service design is that they should tile the business space and be idempotent, independent, and isolated. How can a service be independent and isolated if its rules are maintained somewhere else?
I'd like to err on the side of simplicity, arguing that eliminating dependencies should take precedence over centralization if the rules can be shown to apply only in isolated circumstances. I'm not sure the argument will win the day.
So my questions are:
Where do you fall on the centralization versus independence argument?
What's your experience with Enterprise tools like rules engines?
How can I make the argument for isolation stronger?
If my view is incorrect, what argument would you make in favor of centralization?
In the long run, easy maintenance of the whole thing would be an absolute requirement.
So DRY should be honoured at all cost even if that involves a loss of performance here and there, some additional configuration issues and other "minor" problems.
Also, "independent" is different from "self-contained".
Otherwise, imagine the situation where you need to change something and have to contact a lot of different parties to force them to update. With DRY you also solve the problem of having incompatible versions running at the same time for a brief period.
So:
Centralization > Independence (at least in the system you describe)
A single point of truth for rule engines (everybody on the same page)
Remind them of the cost of maintenance as the years pass
I find your view correct.
Your question is very Enterprise-specific, and I'm more into desktop stuff, so I hope this answer is not too general.
I liked the concept of Don't Repeat Yourself, until I found out how it was being codified and ossified. I liked it because it agreed with me (duh!) and my own ideas about how to make code more maintainable and less error-prone.
Basically, I see greater maintainability as requiring more of a learning curve on the part of the maintainer. I don't think there's an easy way around that. Here's an example of how to increase maintainability by a good factor, but not without a learning curve.

Architecture for Satellite Parts of a Larger Application

I work for a firm that provides certain types of financial consulting services in most states in the US. We currently have a fairly straightforward CRUD application that manages clients and information about assets and services we perform for each. It only concerns itself with the fundamental data points and processes that are common to all locations--the least common denominator.
Now we want to implement support for tracking disparate data points and processes that vary from state to state while preserving the core nationally-oriented system.
The stack I'm working with is ASP.NET and SQL Server 2008. The national application is a fairly straightforward web forms thing. Its data access layer is a repository wrapper around LINQ to SQL entities and the DataContext. There is little business logic beyond CRUD operations currently, but there would be more as the complexities of each state were introduced.
So, how to implement the satellite pieces...
Just start glomming on the functionality and pursue a big ball of mud
Build a series of satellite apps that re-use the data-access layer but are otherwise stand-alone
Invest (money and/or time) in a rules engine (a la Windows Workflow) and isolate the unique bits for each state as separate rule-sets
Invest (time) in a plugin framework a la MEF and implement each state's functionality as a plugin
Something else
The ideal user experience would appear as a single application that seamlessly adapts its presentation and processes to whatever location the user is working with. This is particularly useful because some users work with assets in multiple states. So there is a strike against number two.
I have no experience with MEF or WF, so my question in large part is whether mine is even the type of problem either is intended to address. They both kinda sound like it based on the hype, but could turn out to be a square peg for a round hole.
In all cases each state introduces new data points, not just new processes, so I would imagine the data access layer would grow to accommodate the addition of new tables and columns, but I'm all for alternatives to that as well.
Edit: I tried to think of some examples I could share. One might be that in one state we submit certain legal filings involving client assets. The filing has attributes and workflow that are different from other states that may require similar filings, and the assets involved may have quite different attributes. Other states may not have comparable filings at all; still others may have a series of escalating filings that require knowledge of additional related entities unique to that state.
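
To make option four concrete, here is the rough shape I imagine a MEF-based plugin contract taking; since I have no MEF experience, treat this strictly as a sketch, with IStateModule and friends being invented names:

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;

// Invented contract: each state's functionality would ship as a plugin DLL.
public interface IStateModule
{
    string StateCode { get; }
    void Process(ClientAssets assets); // state-specific filings, workflow, etc.
}

[Export(typeof(IStateModule))]
public class TexasModule : IStateModule
{
    public string StateCode => "TX";
    public void Process(ClientAssets assets) { /* TX-specific filings */ }
}

public static class StateModules
{
    // Discover whatever modules are deployed to the plugins folder.
    // (In a real app the catalog/container would live for the app's lifetime.)
    public static IStateModule For(string stateCode)
    {
        var catalog = new DirectoryCatalog("plugins");
        var container = new CompositionContainer(catalog);
        return container.GetExportedValues<IStateModule>()
                        .First(m => m.StateCode == stateCode);
    }
}

public class ClientAssets { /* shared asset data */ }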
Start with the Strategy design pattern, which basically allows you to outline a "placeholder" to be replaced by concrete classes at runtime.
You'll have to sketch out a clear interface between the core app and the "plugins", and have each strategy implement it. Then, at runtime, when you know which state the user is working on, you can instantiate the appropriate state strategy class (perhaps using a factory method) and call the generic methods on it, e.g. something like:
IStateStrategy stateStrategy = StateSelector.GetStateStrategy("TX"); //State id from db, of course...
stateStrategy.Process(nationalData);
Of course, each of these strategies should use the existing data layer, etc.
The (apparent) downside of this solution is that you'll be hardcoding the rules for each state, and you cannot transparently add new rules (or new states) without changing the code. Don't be fooled, that's not a bad thing - your business logic should be implemented in code, even if it's dependent on runtime data.
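
Fleshing that snippet out slightly (still hypothetical names), the interface and selector could be as simple as:

using System;
using System.Collections.Generic;

// Contract that every state-specific strategy implements.
public interface IStateStrategy
{
    void Process(NationalData data);
}

public class TexasStrategy : IStateStrategy
{
    public void Process(NationalData data)
    {
        // TX-specific filings, attributes, and workflow go here.
    }
}

// Simple factory mapping state codes to strategies.
public static class StateSelector
{
    private static readonly Dictionary<string, Func<IStateStrategy>> Registry =
        new Dictionary<string, Func<IStateStrategy>>
        {
            ["TX"] = () => new TexasStrategy(),
            // register additional states here
        };

    public static IStateStrategy GetStateStrategy(string stateCode) =>
        Registry.TryGetValue(stateCode, out var create)
            ? create()
            : throw new NotSupportedException("No strategy registered for state '" + stateCode + "'.");
}

public class NationalData { /* shared, nationally-oriented fields */ }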
Just a thought: whatever you do, completely code three states first (with two you're still tempted to repeat identical code; with more, it's too time-consuming if you decide to change the design).
I must admit I'm completely ignorant about rules engines or WF. But wouldn't it be possible to just have one big, stupid ASP.NET include file with instructions for the states, separated from the main logic, without any additional language or program?
Edit: Or is it just that each state has quite a lot of completely different functionality, not just some differing bits?
