PegaRULES mining and testing - pega

I'm new to the job and sourcing information about PegaRULES. I hope someone with first-hand experience using Pega can help me.
My business rules are in a COBOL system. Is there any way Pega can facilitate transferring the rules over to its system, or must each business rule be entered all over again?
Should I switch vendors later, how can the business rules in PegaRULES be exported?
What is the scope of testing PegaRULES is able to do? I went through a few of their online courses and I'm certain it can verify and validate rules. What about testing the integration between Pega and other applications?

It's not possible to simply import and export rules between different systems.
The rules in a Pega system are not only "business rules" but also e.g. "user interface rules", "process rules" and "data model rules". I don't see how you would split up your COBOL code into e.g. section rules that can be used to auto-generate HTML5.
Regarding your second question: it's not possible to use exported Pega code (rules) on different systems. What you can plan for, however, is offering the BPM case management and processing via a REST or SOAP service for other systems to consume.

Related

Opengroup SOA ontology service vs service interface vs service contract

I am trying to understand the definitions in this document.
http://www.opengroup.org/soa/source-book/ontologyv2/service.htm
Their definitions of service, service interface and service contract are either unclear or seem different from what I normally encounter.
Service:
“A service is a logical representation of a repeatable activity that
has a specified outcome. It is self-contained and is a ‘black box’ to
its consumers.”
Let's say I have a WCF project and it has two operations:
StoreFront
+GetPrice
+AddToCart
The definition says "a repeatable activity". So is the service StoreFront? Or do I have two services (GetPrice and AddToCart).
Service Contract:
Has an "effect" class. Is the effect "return price" and "added to cart"?
From the same article:
“A capability offered by one entity or entities to others using
well-defined ‘terms and conditions’ and interfaces.” (Source: OMG
SoaML Specification - my italics)
This is, in my opinion, a preferable definition to the one talking about "repeatable activities".
The key word in the definition is capability. Capability refers to Business Capability, a carry-over from the BPM industry, which in an SOA context refers to a business domain with distinct boundaries.
So from this definition we can surmise that services should be exposed, or should operate, within a business capability/process boundary. This leads us towards the idea (from the principles or tenets of SOA) that services should be autonomous within well-defined boundaries.
In your example, you are asking
So is the service StoreFront? Or do I have two services (GetPrice and
AddToCart)
The answer to that as always is "it depends". However, generally Pricing (GetPrice) would belong to a different business capability to Ordering (AddToCart). Additionally, the operations differ in some other important ways:
GetPrice is a read operation, while AddToCart is a write operation.
GetPrice is a synchronous operation, while AddToCart could very well be asynchronous.
So from these we should probably assume that they are two different services from a business perspective.
This assumption has some radical repercussions. If they are two services, then according to SOA they should be autonomous. Meaning that we should be looking to minimize coupling between the services in every possible way, so that as much as possible they can be planned, developed, tested, built, deployed, hosted, supported, and managed as separate concerns.
Another repercussion is that when you physically separate services to this extent, how can you show this stuff together to your users? They may be different capabilities but they still need to work together on the screen.
Additionally, from a back end perspective Ordering needs to know about Pricing data, otherwise how can order fulfillment happen? If you've separated the database into two, how can the Checkout service know how much stuff costs, what discounts to apply, etc?
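One common way to keep Ordering autonomous while still letting it know what things cost is to capture a price snapshot at add-to-cart time instead of having Ordering query Pricing synchronously. This is a minimal sketch of that idea; the class and field names (`PricingService`, `OrderingService`, `CartLine`) are illustrative assumptions, not anything from the Open Group document:

```python
from dataclasses import dataclass, field

@dataclass
class PricingService:
    """Read-focused capability that owns price data."""
    prices: dict  # sku -> unit price

    def get_price(self, sku: str) -> float:
        return self.prices[sku]

@dataclass
class CartLine:
    sku: str
    quantity: int
    unit_price: float  # snapshot of the price shown to the user

@dataclass
class OrderingService:
    """Write-focused capability. It never calls Pricing directly;
    the caller passes along the price it already displayed, so the
    two services stay decoupled."""
    cart: list = field(default_factory=list)

    def add_to_cart(self, sku: str, quantity: int, unit_price: float) -> None:
        self.cart.append(CartLine(sku, quantity, unit_price))

    def total(self) -> float:
        return sum(line.quantity * line.unit_price for line in self.cart)

# The UI composes the two capabilities on one screen:
pricing = PricingService(prices={"widget": 9.99})
ordering = OrderingService()
price = pricing.get_price("widget")        # read from Pricing
ordering.add_to_cart("widget", 3, price)   # write to Ordering with a snapshot
```

The trade-off is deliberate data duplication: Ordering records the price that was quoted, which also answers the "what was the price at checkout time" question without a cross-service join.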
I have posted about this stuff before, so please feel free to have a read. I would recommend reading the excellent article on Microservices by Lewis and Fowler also.

Routing/filtering messages without orchestrations

A lot of our use cases for BizTalk involve simply mapping and routing HL7 2.x messages from one system to another. Implementing maps and associating them with send/receive ports is generally straightforward, but we also need to do some content-based filtering on the sending side.
For example, we may want to send ADT A04 and ADT A08 messages to System X only if the sending facility is any of 200 facilities (out of a possible 1,000 facilities in our organization), while System Y needs ADT A04, A05 and A08 for a totally different set of facilities, and only for renal patients.
Because we're just routing messages and not really managing business processes here, using orchestrations for the sole purpose of calling out to the business rules engine is a little overkill, especially considering that we'd probably need a separate orchestration for each ADT type because of how the schemas work. Is it possible to implement filter rules like this without using orchestrations? The filter functionality of send ports looks a little too rudimentary for what we need, but at the same time I'd rather not develop and manage orchestrations.
You might be able to do this with property schemas...
You need to create a property schema and include the properties (from the other schemas) that you want to use for routing. Once you deploy the schema, those properties will be available for use as a filter in the send port. Start from here, you should be able to find examples somewhere...
As others have suggested you can use a custom pipeline component to call the Business Rules Engine.
And rather than trying to create your own, there is already an open-source one available called the BizTalk Business Rules Engine Pipeline Framework.
By calling BRE from the pipeline you can create complex rules which then set simple context properties on which you can route your messages.
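To make the routing decision concrete, here is a small sketch (in Python, purely illustrative) of the kind of rule the BRE/context properties would drive: each subscriber's filter is a combination of message type, sending facility, and patient criteria. The facility IDs and field names are assumptions for the example:

```python
# Subscriber filter sets (in reality ~200 and a different set of facilities).
SYSTEM_X_FACILITIES = {"FAC001", "FAC002", "FAC003"}
SYSTEM_Y_FACILITIES = {"FAC500", "FAC501"}

def route(message_type: str, sending_facility: str, renal_patient: bool) -> list:
    """Return the list of subscriber systems for one HL7 ADT message."""
    destinations = []
    # System X: A04/A08 from its facility list, any patient type.
    if message_type in {"ADT^A04", "ADT^A08"} and sending_facility in SYSTEM_X_FACILITIES:
        destinations.append("SystemX")
    # System Y: A04/A05/A08 from its facility list, renal patients only.
    if (message_type in {"ADT^A04", "ADT^A05", "ADT^A08"}
            and sending_facility in SYSTEM_Y_FACILITIES
            and renal_patient):
        destinations.append("SystemY")
    return destinations
```

In BizTalk terms, a pipeline component (or the BRE) would evaluate logic like this and promote a simple context property per destination, which the send port filters then match on.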
Full disclosure: I've worked with the author of that framework when we were both at the same company.

Workflow Foundation - Can I make it fit?

I have been researching Workflow Foundation for a week or so now. I have been aware of it, and of its concepts and use cases, for many years, but never had the chance to dedicate any time to going deeper.
We now have some projects that would benefit from centralized business logic exposed as services. As these projects require many different interfaces on different platforms, I can see "Business Logic Silos" occurring.
I have played around with some proofs of concept to discover what is possible and how it can be achieved, and I must say it's a bit of a fundamental phase shift for a regular C# developer.
There are 3 things that I want to achieve:
Runtime-instantiated state machines
Customizable by the user (perform different tasks in different orders and have unique functions called between states).
WCF exposed
So I have gone down the route of testing state machine workflows, XAMLX WCF services, AppFabric-hosted services with persistence and monitoring, loading XAMLX services from the database at runtime, etc., but all of these examples seem not to play nicely together. For example, a hosted state machine service in AppFabric has issues with the sequence of service method calls, such as:
"Operation 'MethodName' on service instance with identifier 'efa6654f-9132-40d8-b8d1-5e611dd645b1' cannot be performed at this time. Please ensure that the operations are performed in the correct order and that the binding in use provides ordered delivery guarantees".
Also, if you instantiate workflow services at runtime from a SQL store, they cannot be tracked in AppFabric.
I would like to Thank Ron Jacobs for all of his very helpful Hands On Labs and blog posts.
Are there any examples out there that anyone knows of that will tie together all of these concepts?
Am I trying to do something that is not possible or am I attempting this in the right way?
Thanks for all your help and any comments that you can make to assist.
Nick
Regarding the error, it seems like you have modified the WF once deployed (is that #2 in your list?), hence the error you mention.
Versioning (or for this case, modifying a WF after it's been deployed) is something that will be improved in the coming version, but I don't think it will achieve what you need in #2 (if it is what I understood), as the same WF is used for every instance.

"What makes a good BizTalk project"

"What makes a good BizTalk project" is a question I was asked recently by a client's head of IT. It's rather open ended, so rephrasing it slightly to :
"What are your top ten best practices for BizTalk 2006 and onwards projects - not limited to just technical practices, e.g. organisational?"
I wrote an article called "Top 10 BizTalk Server Mistakes" that covers some key best practices in terms of usable information rather than a simple list. Here's the listing:
Using orchestrations for everything
Writing custom code instead of using existing adapters
Using non-serializable types and wrapping them inside an atomic transaction
Mixing transaction types
Relying on Public schemas for private processing
Using XmlDocument in a pipeline
Using 'specify now' binding
Using BizTalk for ETL
Dumping debug/intermediate results to support debugging
Propagating the myth that BizTalk is slow
...and the link to the complete article: [Top 10 BizTalk Server Mistakes](http://artofbabel.com/columns/top-x/49-top-10-biztalk-server-mistakes.html)
The key point is to emphasize to the client that BizTalk is a Swiss Army knife for interop... an expensive Swiss Army knife. A programmer can wire up two enterprise systems with a WCF application as fast as you can with BizTalk. The key things to include/require when using BizTalk are to:
Have more than simple point integrations. If this is all you have, fine, see the rest.
Have all or a portion of a process that is valuable going through BizTalk, so that you can instrument it with BAM and provide process monitoring to the organization... maybe even some BI.
If you are implementing a one-to-many or many-to-one scenario, use of the BizTalk ESB patterns will pay dividends in the long run.
When there are items that need to be regularly tweaked (thresholds, URIs, etc.), use of the Business Rules Engine can provide an easily maintainable solution.
When endpoints might be semi-connected, BizTalk bakes in queueing of messages for no extra effort.
Complicated correlations or ordering of messages.
Integrating with existing enterprise systems can be simplified with the adapter packs provided as part of BizTalk. This alone can save big bucks. Asking Oracle, PeopleSoft or Siebel folks about XML and web services can be a challenging experience. The adapters get you and BizTalk through the enterprise apps' front door and reduce the work significantly.
There are more I just can't think of at midnight.
Any of these items make BizTalk a winning candidate, because so much of it is given to you with the platform. If you are not being required to provide any of these, you should really attempt to deliver some of these benefits in a highly visible way to the client. If you don't, it's just an expensive and underutilized Swiss Army knife.
I'll start with environment and deployment planning. Especially testing deployment, and matching your QA/stage environment (whatever the pre-production environment is) to the production environment, so you don't discover some weirdness at midnight when you are trying to go live.

Scalable/Reusable Authorization Model

Ok, so I'm looking for a bit of architecture guidance, my team is getting a chance to re-cast certain decisions with a new feature that we're building, and I wanted to see what SO thought :-) There are of course certain things that we're not changing, so the solution would have to fit in this model. Namely, that we've got an ASP.NET application, which uses web services to allow users to perform actions on the system.
The problem comes in because, as with many systems, different users need access to different functions. Some roles have access to the Y button, others have access to the Y and B buttons, while another still has access only to B. Most of the time that I see this, developers just put in a mishmash of if statements to deal with the UI state. My fear is that left unchecked, this will become an unmaintainable mess, because in addition to putting authorization logic in the GUI, it needs to be put in the web services (which are called via AJAX) to ensure that only authorized users call certain methods.
So my question to you is: how can a system be designed to reduce the random ad-hoc if statements scattered here and there that check for specific roles, with authorization logic that can be reused in both the GUI/webform code and the web service code?
Just for clarity, this is an ASP.NET web application, using webforms, and Script# for the AJAX functionality. Don't let the script# throw you off of answering, it's not fundamentally different than asp.net ajax :-)
Moving on from traditional group-, role-, or operation-level permissions, there is a push towards "claims-based" authorization, like what was delivered with WCF.
Zermatt is the codename for the Microsoft class library that will help developers build claims-based applications on the server and client. Active Directory will become one of the STSs an application would be able to authorize against, concurrently with your own as well as other industry-standard servers.
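The core shift in claims-based authorization is that the application stops asking "what groups is this user in?" and instead checks claims carried in a token issued by a trusted STS. A rough sketch of that check, with made-up claim names and no real WIF/Zermatt API (this is a conceptual illustration only):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """A typed assertion about the user, vouched for by the issuer."""
    type: str
    value: str

def issued_token() -> set:
    """Stand-in for what an STS would return after authenticating the user.
    The claim types/values here are invented for the example."""
    return {
        Claim("action", "ViewOrders"),
        Claim("action", "ApproveOrders"),
        Claim("department", "Finance"),
    }

def demand(token: set, action: str) -> bool:
    """The app trusts the token's claims; it never looks up groups itself."""
    return Claim("action", action) in token

token = issued_token()
approved = demand(token, "ApproveOrders")
```

Because the same token travels with both the page request and the AJAX service call, the GUI and the web services end up enforcing the same claims rather than duplicating role checks.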
In Code Complete (p. 411) Steve McConnell gives the following advice (which Bill Gates reads as a bedtime story in the Microsoft commercial).
"used in appropriate circumstances, table driven code is simpler than complicated logic, easier to modify, and more efficient."
"You can use a table to describe logic that's too dynamic to represent in code."
"The table-driven approach is more economical than the previous approach [rote object oriented design]"
Using a table-based approach you can easily add new "users" (as in the modeling idea of a user/agent along with its actions). It's a good way to avoid many "if"s, and I've used it before for situations like yours; it kept the code nice and tidy.
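For the Y/B button scenario in the question, the table-driven idea might look like this: one permission table, consulted by both the webform (to show/hide buttons) and the service method (to reject unauthorized calls). The role and action names are placeholders, not anything from the original system:

```python
# Single source of truth: which roles may perform which actions.
PERMISSIONS = {
    "clerk":   {"B"},
    "manager": {"Y", "B"},
    "auditor": {"Y"},
}

def can(role: str, action: str) -> bool:
    """One shared check; no scattered role-specific if statements."""
    return action in PERMISSIONS.get(role, set())

# UI layer: decide button visibility from the table.
show_y_button = can("manager", "Y")

# Service layer: guard the web method with the same table.
def press_y(role: str) -> str:
    if not can(role, "Y"):
        raise PermissionError(f"role {role!r} may not perform Y")
    return "Y pressed"
```

Adding a new role (or a new button) becomes a one-line table change instead of edits in both the GUI and the service code, which is exactly the maintainability win McConnell describes.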