Best way to replace Drools Rules

I have a system where 'dynamic logic' is implemented as Drools rules, using a rules engine.
For each client implementation, custom pricing and tax calculation logic is implemented using the DRL files for that specific implementation, for example:
rule "abc"
when
    // Product is an illustrative fact type; match it by name
    $p : Product( name == "X" )
then
    $p.setPrice( 12 );
end
One rule's condition can depend on what a previous rule has set, so there are effectively rule transitions.
This is really painful, as Drools rules are not sequential programming and are not developer friendly. Lots of bugs get introduced through misinterpretation of how Drools evaluates the rules.
Is there a better Java/Groovy alternative that could easily replace it?

I think the answer is going to depend on what you ultimately want the end solution to be. If you want to pull your business rules out of a rules engine and put them into Java/Groovy, that is very different from wanting to pull them out of one rules engine and into another.
Your question seems to lean towards the former, so I'll address that. Be very careful with this approach. The people who implemented this appear to have done it the proper way with respect to the Rete algorithm, since it sounds like the firing of one rule can trigger other rules. That is good: business rules aren't sequential, they are declarative. Remember that imperative software is written mostly for engineers; it doesn't map back to the real world 100% of the time :)
If you want to move this into Java/Groovy you are moving into an imperative language, which could land you in if/then/else hell. I would suggest the following:
Isolate this code away from the rest of your codebase - you will be doing a lot of maintenance on this code in the future when the business changes their rules. Good interface design and encapsulation here will pay off big time down the road.
Develop some type of DSL with your business customer so when they say something like "Credit Policy", you know exactly what they are referring to and can change the related rules appropriately.
Unit Test, Unit Test, Unit Test. This applies to your current configuration too. If you are finding bugs now, why aren't your tests finding them? It doesn't take long to set up JUnit to create an object, invoke your Drools engine and test the response. If you add some loops to test ranges of variables that expect the same response, you can be into the hundreds of thousands of tests in no time.
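For example, a minimal JUnit sketch of such a test (this assumes the Drools 6/7 KIE API; the session name "pricingSession", the Product fact class and its accessors are placeholders for whatever your project actually defines):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class PricingRulesTest {

    @Test
    public void setsPriceWhenNameIsX() {
        // Load the rules from the classpath; "pricingSession" must match a session in kmodule.xml.
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession("pricingSession");
        try {
            Product product = new Product("X"); // hypothetical fact class
            session.insert(product);
            session.fireAllRules();
            assertEquals(12, product.getPrice());
        } finally {
            session.dispose();
        }
    }
}

Wrapping calls like this in loops over input ranges gives you the broad regression net described above at very little cost.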
As an aside: If you don't want to go down this route, then I highly suggest getting some training on Drools so that you understand the engine and Rete if you don't already. There are some big wins you can make with your customers if you are able to quickly translate their rules into implementable software.

Related

How to approach writing developer tests (unit tests, integration tests, etc) for a system?

I have a WCF service which runs and interacts with a database, the file system and a few external web services, then creates the result, XML-serializes it and finally returns it.
I'd like to write tests for this solution and I'm thinking about how (it all uses dependency injection and design by contract).
There are 3 main approaches I can take.
1) I can pick the smallest units of code/methods and write tests for them: pick one class and isolate it from its dependencies (other classes, etc.). Although it guarantees quality, it takes a lot of time to write the tests, and that's slow.
2) Only make the interactions with external systems mockable and write tests that cover the main scenarios, from when the request is made until the response is serialized and returned. This tests all the interactions between my classes but mocks all external resource access (see the sketch after this list).
3) I can set up a test environment where the interactions with external web services, file access, database access, etc. actually happen, then write the tests end to end. This requires environmental setup and depends on all the other systems being up and running.
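A rough sketch of what option #2 can look like, written in Java with JUnit and Mockito purely for illustration (the service, the boundary interfaces and every name below are hypothetical; in a .NET/WCF solution the same shape applies with a mocking framework such as Moq):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class QuoteServiceScenarioTest {

    // Hypothetical boundary interfaces standing in for the external systems.
    interface TaxRateClient { double rateFor(String region); }
    interface QuoteRepository { void save(String region, double total); }

    // Hypothetical service under test: all internal logic is real, only the boundaries are mocked.
    static class QuoteService {
        private final TaxRateClient taxRates;
        private final QuoteRepository repository;

        QuoteService(TaxRateClient taxRates, QuoteRepository repository) {
            this.taxRates = taxRates;
            this.repository = repository;
        }

        double quote(String region, double netAmount) {
            double total = netAmount * (1 + taxRates.rateFor(region));
            repository.save(region, total);
            return total;
        }
    }

    @Test
    public void runsMainScenarioWithExternalSystemsMocked() {
        TaxRateClient taxRates = mock(TaxRateClient.class);
        QuoteRepository repository = mock(QuoteRepository.class);
        when(taxRates.rateFor("EU")).thenReturn(0.25);

        double total = new QuoteService(taxRates, repository).quote("EU", 100.0);

        assertEquals(125.0, total, 0.001);      // the scenario result is checked end to end
        verify(repository).save("EU", 125.0);   // the outbound call to the mocked boundary is verified
    }
}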
About #1, I see no point in investing the time/money/energy in writing tests for every single method or piece of code that I have; I consider it a waste of time.
About #3, since it depends on external resources/systems, it's hard to set up and keep running.
#2 sounds like the best option to me, since it tests what should be tested: my system and all its classes, while mocking all external systems.
So basically, my conclusion after some years of experience with unit tests is that writing unit tests is a waste to be avoided, and that isolated system tests give the best return on investment.
Even if I were going to write the tests first (TDD) and then the production code, I still think #2 would be best.
What's your view on this? Would you write small unit tests for your application? Would you consider it a good practice and the best use of time/budget/energy?
If you want to talk about quality, you should have all 3:
Unit tests to ensure your code does what you think it does, expose any edge cases and help with regression. You (developer) should write such tests.
Integration tests to verify the correctness of the entire process: whether components talk to each other correctly, and so on. Again, you as a developer write such tests.
System-wide tests in a production-like environment (with some limitations, naturally - you might not have access to the client's database, but you should have an exact copy of it on your local machines). Those tests are usually written by dedicated testers (often in programming languages different from the application code), but of course they can be written by you.
The second and third types of tests (integration and system) take far too much effort to cover the edge cases of smaller components; that is what you usually want unit tests for. You need integration tests because something might fail when hooking up modules that are individually tested, verified and correct. And of course system testing is what you do daily during development, or have assigned people (manual testers) do.
Going for only selected types of tests from the list might work up to a point, but it is far from a complete solution or quality software.
All 3 are important and target different test types: a matrix of unit/integration/system categories, with positive and negative testing in each category.
For code coverage, unit testing will yield the highest percentage, followed by integration, then system testing.
You also need to consider whether the purpose of a test is validation (will it meet the final user/customer requirements, i.e. value?) or verification (is it written to specification, i.e. correct?).
In summary, the answer is 'it depends'. I would recommend following the SEI CMMI model for verification and validation (i.e. testing), which begins with the goals (value) of each activity and then subjects that activity to measures, ultimately allowing the whole process to undergo continuous improvement. In this way you have separated the what and why from the how, and you will be able to answer time-and-value questions for your given environment (which could be a life-support system or a tweet-of-the-day app for your favourite aunt).
Summary: #2 (integration testing) seems most logical, but you shouldn't hesitate to use a variety of tests to achieve the best coverage for pieces of your codebase that need it most. Shooting for having tests for "everything" is not a worthy goal.
Long version
There is a school of thought out there where devs are convinced that adopting unit/integration/system tests means striving for every single chunk of code being tested. It's either no test coverage at all, or committing to testing "everything". This binary thinking always makes adopting any kind of testing strategy seem very expensive.
The truth is, forcing every single line of code/function/module to be tested is about as sound as writing all your code to be as fast as possible. It takes too much time and effort, and most of it nets very little return. Another truth is that you can never achieve true 100% coverage in a non-trivial project.
Testing is not a goal unto itself. It's a means to achieve other things: final product quality, maintainability, interoperability, and so on, all while expending the least amount of effort possible.
With that in mind, step back and evaluate your particular circumstances. Why do you want to "write tests for this solution"? Are you unhappy with the overall quality of the project today? Have you experienced high regression rates? Are you perhaps unsure about how some module works (and more importantly, what bugs it might have)? Regardless of what your exact goal is, you should be able to select pieces that pose particular challenges and focus your attention on them. Depending on what those pieces are, an appropriate testing approach can be selected.
If you have a particularly tricky function or a class, consider unit testing them. If you're faced with a complicated architecture with multiple, hard to understand interactions, consider writing integration tests to establish a clean baseline for your trickiest scenarios and to better understand where the problems are coming from (you'll probably flush out some bugs along the way). System testing can help if your concerns are not addressed in more localized tests.
Based on the information you provided for your particular scenario, external-facing unit testing/integration testing (#2) looks most promising. It seems like you have a lot of external dependencies, so I'd guess this is where most of the complexity hides. Comprehensive unit testing (#1) is a superset of #2, with all the extra internal stuff carrying questionable value. #3 (full system testing) will probably not allow you to test external edge cases/error conditions as well as you would like.

Restrict violation of architecture - asp.net MVP

If we have a defined hierarchy in an application, for example a 3-tier architecture, how do we restrict subsequent developers from violating the norms?
For example, in the case of an MVP (not ASP.NET MVC) architecture, the presenter should always bind the model and view. This helps in writing proper unit tests. However, we had instances where people directly imported the model in the view and called its functions, violating the norms, and hence the test cases couldn't be written properly.
Is there a way we can restrict which classes are allowed to inherit from a set of classes? I am looking at various possibilities, including adopting a different design pattern; however, a new approach should be worth the code change involved.
I'm afraid this is not possible. We tried to achieve this with the help of attributes and we didn't succeed. You may want to refer to my past post on SO.
The best you can do is keep checking your assemblies with NDepend. NDepend shows you a dependency diagram of the assemblies in your project, and you can immediately track violations and take action reactively.
[NDepend dependency diagram - source: ndepend.com]
It's been almost 3 years since I posted this question. I must say that I have kept exploring this, despite the brilliant answers here. Some of the lessons I've learnt so far:
More code smells come out by looking at the consumers (unit tests are the best place to look, if you have them).
The number of parameters in a constructor is a direct indication of the number of dependencies. Too many dependencies => the class is doing too much.
The number of (public) methods in a class is another such indicator.
The setup of unit tests will almost always give this away.
Code deteriorates over time unless there is a focused effort to clear technical debt and to refactor. This is true irrespective of the language.
Tools can help only to an extent, but a combination of tools and tests often gives enough hints about various smells. It takes a bit of experience to catch them in a timely fashion, and particularly to understand each smell's significance and impact.
You want to solve a people problem with software? Prepare for a world of pain!
The way to solve the problem is to make sure you have ways of working with people such that you don't end up with these kinds of problems: pair programming, reviews, induction of people when they first come onto the project, etc.
Having said that, you can write tools that analyse the software and look for common problems. But people are pretty creative and can find all sorts of bizarre ways of doing things.
Just as soon as everything gets locked down according to your satisfaction, new requirements will arrive and you'll have to break through the side of it.
Enforcing such stringency at the programming level with .NET is almost impossible considering a programmer can access all private members through reflection.
Do yourself a favour: schedule regular code reviews, provide education and implement proper training. And, as you said, it will quickly become evident when you can't write unit tests against it.
What about NetArchTest, which is inspired by ArchUnit?
Example:
// Classes in the presentation layer should not directly reference repositories
var result = Types.InCurrentDomain()
    .That()
    .ResideInNamespace("NetArchTest.SampleLibrary.Presentation")
    .ShouldNot()
    .HaveDependencyOn("NetArchTest.SampleLibrary.Data")
    .GetResult()
    .IsSuccessful;

// Classes that depend on System.Data should reside in the data namespace
result = Types.InCurrentDomain()
    .That().HaveDependencyOn("System.Data")
    .And().ResideInNamespace("ArchTest")
    .Should().ResideInNamespace("NetArchTest.SampleLibrary.Data")
    .GetResult()
    .IsSuccessful;
"This project allows you create tests that enforce conventions for class design, naming and dependency in .Net code bases. These can be used with any unit test framework and incorporated into a build pipeline. "

Abstraction or not?

The other day I stumbled onto a rather old Usenet post by Linus Torvalds. It is the infamous "You are full of bull****" post in which he defends his choice of plain C for Git over something more modern.
In particular, this post made me think about the enormous number of abstraction layers that accumulate on top of one another where I work. Mine is a Windows .NET environment. I must say that I like C# and the .NET environment; it really makes most things easy.
Now, I come from a very different background made of Unix technologies like C and a plethora of scripting languages; to me, also, OOP is just one, and not always the best, programming paradigm. I often struggle (in a working kind of way, of course!) with my colleagues (one in particular), because they appear to be of the "any problem can be solved with an additional level of abstraction" church, while I'm more of the "keep it simple" school. I think there is a very different mental approach to problems that maybe comes from exposure to different cultures.
As a very simple example, for the first project I did here I needed some configuration for an application. I wrote a 10-line class to load and parse a txt file located in the program's root dir, containing colon-separated key/value pairs, one per row. It worked.
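Roughly the kind of thing I mean, sketched here in Java purely for illustration (the class name and file layout are made up):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// A minimal colon-separated key/value loader: one "key: value" pair per line.
public class SimpleConfig {
    private final Map<String, String> values = new HashMap<>();

    public SimpleConfig(String fileName) throws IOException {
        for (String line : Files.readAllLines(Paths.get(fileName))) {
            int sep = line.indexOf(':');
            if (sep > 0) {
                values.put(line.substring(0, sep).trim(), line.substring(sep + 1).trim());
            }
        }
    }

    public String get(String key) {
        return values.get(key);
    }
}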
In the end, to standardize the approach to the configuration problem, we now have a library, installed on every machine running each configured program, that calls a service which, at startup, loads an XML file containing references to other XML files, one per application, which in turn contain the configurations themselves.
Now, it is extensible and made up of fancy reusable abstractions, providers and all, but I still think that, if one day we really do happen to reuse part of it, in the time it took to build we could have written the needed code from scratch, or copy/pasted the old code and modified it.
What are your thoughts about it? Can you point out some interesting reference dealing with the problem?
Thanks
Abstraction makes it easier to construct software and understand how it is put together, but it complicates fully understanding certain issues around performance and security, because the abstraction layers introduce certain kinds of complexity.
Torvalds' position is not absurd, but he is an extremist.
Simple answer: programming languages provide data structures and ways to combine them. Use these directly at first, do not abstract. If you find you have representation invariants to maintain that are at a high risk of being broken due to a large number of usage sites possibly outside your control, then consider abstraction.
To implement this, first provide functions and convert the call sites to use them without hiding the representation. Hide the data representation only when you're satisfied your functional representation is sufficient. Make sure at this time to document the invariant being protected.
An "extreme programming" version of this: do not abstract until you have test cases that break your program. If you think the invariant can be breached, write the case that breaks it first.
Here's a similar question: https://stackoverflow.com/questions/1992279/abstraction-in-todays-languages-excited-or-sad.
I agree with @Steve Emmerson - 'Coders at Work' would give you some excellent perspective on this issue.

Backwards compatibility vs. standards compliancy?

Say you have an old codebase that you need to maintain and it's clearly not compliant with current standards. How would you distribute your efforts to keep the codebase backwards compatible while gaining standards compliance? What is important to you?
At my workplace we don't get any time to refactor things just because it makes the code better. There has to be a bug or a feature request from the customer.
Given that rule I do the following:
When I refactor:
If I have to make changes to an old project, I clean up, refactor or rewrite the part that I'm changing.
Because I always have to understand the code to make the change in the first place, that's the best time to make other changes.
This is also a good time to add missing unit tests, for both your change and the existing code.
I always do the refactoring first and then make my changes, so that way I'm much more confident my changes didn't break anything.
Priorities:
The code parts which need refactoring the most often contain the most bugs. Also, we don't get any bug reports or feature requests for parts which are of no interest to the customer. Hence, with this method, prioritization comes automatically.
Circumventing bad designs:
I guess there are tons of books about this, but here's what helped me the most:
I build facades:
Facades with a new, good design which "quarantine" the existing bad code. I use this if I have to write new code and have to reuse badly structured existing code which I can't change due to time constraints (a sketch of this follows after the list).
Facades with the original bad design, which hide the new good code. I use this if I have to write new code which is used by the existing code.
The main purpose of those facades is dividing good from bad code and thus limiting dependencies and cascading effects.
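A minimal sketch of the first kind of facade; every class and method name here is illustrative, not from the original system:

// The existing badly structured code, which cannot be changed right now.
class LegacyTaxEngine {
    private String regionCode;
    void setRegionCode(String code) { this.regionCode = code; }
    void loadRates() { /* slow, stateful setup the legacy code insists on */ }
    double calc(double net) { return "EU".equals(regionCode) ? net * 0.20 : 0.0; }
}

// The clean design that new code depends on.
interface TaxCalculator {
    double taxFor(double netAmount, String region);
}

// The facade: the only place that knows about the legacy engine and its awkward call sequence.
class LegacyTaxFacade implements TaxCalculator {
    private final LegacyTaxEngine engine = new LegacyTaxEngine();

    @Override
    public double taxFor(double netAmount, String region) {
        engine.setRegionCode(region);
        engine.loadRates();
        return engine.calc(netAmount);
    }
}

New code only ever sees TaxCalculator, so the dependency on the bad code is confined to one class and can be swapped out later.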
I'd factor in time to fix up modules as I needed to work with them. An up-front rewrite/refactor isn't going to fly in any reasonable company, but it'll almost definitely be a nightmare to maintain a bodgy codebase without cleaning it up somewhat.

Refactoring for Testability on an existing system

I've joined a team that works on a product. This product has been around for ~5 years or so, and uses ASP.NET WebForms. Its original architecture has faded over time, and things have become relatively disorganized throughout the solution. It's by no means terrible, but definitely can use some work; you all know what I mean.
I've been performing some refactorings since coming on to the project team about 6 months ago. Some of those refactorings are simple: Extract Method, Pull Up Method, etc. Some of the refactorings are more structural. The latter changes make me nervous, as there isn't a comprehensive suite of unit tests to accompany every component.
The whole team is on board for the need to make structural changes through refactoring, but our Project Manager has expressed some concerns that we don't have adequate tests to make refactorings with the confidence that we aren't introducing regression bugs into the system. He would like us to write more tests first (against the existing architecture), then perform the refactorings. My argument is that the system's class structure is too tightly coupled to write adequate tests, and that using a more Test Driven approach while we perform our refactorings may be better. What I mean by this is not writing tests against the existing components, but writing tests for specific functional requirements, then refactoring existing code to meet those requirements. This will allow us to write tests that will probably have more longevity in the system, rather than writing a bunch of 'throw away' tests.
Does anyone have any experience as to what the best course of action is? I have my own thoughts, but would like to hear some input from the community.
Your PM's concerns are valid - make sure you get your system under test before making any major refactorings.
I would strongly recommend getting a copy of Michael Feathers' book Working Effectively With Legacy Code (by "Legacy Code" Feathers means any system that isn't adequately covered by unit tests). This is chock full of good ideas for how to break down those couplings and dependencies you speak of, in a safe manner that won't risk introducing regression bugs.
Good luck with the refactoring programme; in my experience it's an enjoyable and cathartic process from which you can learn a lot.
Can you re-factor in parallel? What I mean is: re-write the pieces you want to refactor using TDD, but leave the existing code base in place. Then phase out the existing code once your new tests meet the needs of your PM.
I would also like to throw in a suggestion to visit the Refactoring website by Martin Fowler. He literally wrote the book on this stuff.
As far as introducing unit tests into the equation goes, the best method I have found is to pick a top-level component, identify all the external dependencies it has on concrete objects, and replace them with interfaces. Once you've done that, it will be a lot easier to write unit tests against your code base, and you can do it one component at a time. Even better, you won't have to throw away any unit tests.
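A sketch of that idea, shown in Java purely for illustration (the same shape applies in C#; the presenter and mailer classes below are hypothetical):

// Before: the component creates its collaborator itself, so it cannot be tested in isolation.
// class OrderPresenter {
//     private final SmtpMailer mailer = new SmtpMailer();
//     ...
// }

// After: the dependency is expressed as an interface and passed in through the constructor.
interface Mailer {
    void send(String to, String subject, String body);
}

class OrderPresenter {
    private final Mailer mailer;

    OrderPresenter(Mailer mailer) {   // production code passes the real implementation,
        this.mailer = mailer;         // tests pass a stub or mock
    }

    void confirm(String customerEmail) {
        mailer.send(customerEmail, "Order confirmed", "Thanks for your order.");
    }
}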
Unit testing ASP.NET can be tricky, but there are plenty of frameworks that make it easier to do: ASP.NET MVC and WCSF, to name a few.
Just tossing out a second recommendation for Working Effectively with Legacy Code, an excellent book that really opened my eyes to the fact that almost any old / crappy / untestable code can be wrangled!
Totally agree with the answer from Ian Nelson. Additionally, I would start to get some "high level" tests (functional or component tests) in place to preserve the behaviour from the viewpoint of the user. This point might be the most important concern for your PM.
