Can my unit test class keep track of test execution - integration-testing

Our code defines some "rules" in a List<Rule> collection. Each rule contains some logic in the form of a string. The rules are all passed to a "rule engine" along with some data. The rule engine sequentially evaluates the data against rules until it finds a rule which evaluates as true, then it returns that Rule.
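For context, a minimal sketch of the shape described above (the class and method names here are illustrative, not the real types):
using System.Collections.Generic;

public class Rule
{
    public string Logic { get; set; }   // the rule's logic, held as a string
}

public class RuleEngine
{
    // Returns the first rule whose logic evaluates as true against the data, or null if none match.
    public Rule Evaluate(List<Rule> rules, object data)
    {
        foreach (var rule in rules)
        {
            if (EvaluatesTrue(rule, data))
            {
                return rule;
            }
        }
        return null;
    }

    // Placeholder for however the engine actually interprets the Logic string.
    private bool EvaluatesTrue(Rule rule, object data) => false;
}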
We want to make sure we automatically test every rule. The tests will actually be integration tests, rather than unit tests, because they'll test the combination of the rule engine and the rules.
How do I write a test that says "make sure each rule evaluates as true in at least one unit test"?
I've figured out a way to run some fixture teardown code after all the tests have run (see https://xunit.github.io/docs/shared-context.html#class-fixture), and by using a static variable to record evaluated rules I can check during the teardown whether all the rules have been returned during the tests. But this approach has the undesirable effect of causing individual tests to report as failed (in teardown) even though they didn't actually fail.

TLDR: control the test sequence. Since these are integration tests, that's not the big no-no it would be if they were unit tests.
I found that by using a static variable to record the rules that evaluated as true, and ensuring the have-all-rules-evaluated-as-true test runs last, as per the advice here https://hamidmosalla.com/2018/08/16/xunit-control-the-test-execution-order/, it becomes trivially easy to achieve what I needed.
In case the link dies, here's how to implement this approach.
First, create an AlphabeticalOrderer implementing xUnit's ITestCaseOrderer:
using System;
using System.Collections.Generic;
using System.Linq;
using Xunit.Abstractions;
using Xunit.Sdk;

public class AlphabeticalOrderer : ITestCaseOrderer
{
    public IEnumerable<TTestCase> OrderTestCases<TTestCase>(IEnumerable<TTestCase> testCases)
        where TTestCase : ITestCase
    {
        var result = testCases.ToList();
        result.Sort((x, y) => StringComparer.OrdinalIgnoreCase.Compare(x.TestMethod.Method.Name, y.TestMethod.Method.Name));
        return result;
    }
}
Then reference the newly created orderer via the [TestCaseOrderer] attribute on your test class, and create a static variable to record progress:
[TestCaseOrderer("MyCompany.AlphabeticalOrderer", "MyTestsDLL")]
public class RulesTests
{
    private static List<Rule> _untestedRules;

    public RulesTests()
    {
        // The constructor runs once per [Fact]...
        if (_untestedRules == null)
        {
            // ...but this initialisation only runs the first time around.
            _untestedRules = GetAllRules(); // hypothetical helper: load the full rule set however your code does it
        }
    }
Each time a rule triggers, record which rule it was:
private void MarkRuleAsTested(Rule testedRule)
{
    var rulesToRemove = _untestedRules.Where(r => r.Logic == testedRule.Logic).ToList();
    foreach (var ruleToRemove in rulesToRemove)
    {
        _untestedRules.Remove(ruleToRemove);
    }
}
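A typical rule test then looks something like this; the data builder, engine call, and rule-set helper are the hypothetical names used in the sketches above, and the method name is chosen so it sorts alphabetically before the final ZZ_ test:
[Fact]
public void BB_high_value_order_rule_triggers()
{
    // Illustrative only: arrange data that should make exactly this rule evaluate as true.
    var data = BuildHighValueOrder();                       // hypothetical data builder
    var triggeredRule = new RuleEngine().Evaluate(GetAllRules(), data);

    Assert.NotNull(triggeredRule);
    MarkRuleAsTested(triggeredRule);                        // remove it from _untestedRules
}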
Then the last test to run should check the collection:
[Fact]
public void ZZ_check_all_rules_have_been_triggered_by_other_tests()
{
    // This test must run last because it validates that the
    // OTHER tests taken together have resulted in each rule
    // being evaluated as true at least once.
    if (!_untestedRules.Any())
    {
        return;
    }

    var separator = "\r\n------------------------\r\n";
    var untestedRules = string.Join(separator, _untestedRules.Select(ur => $"Logic: {ur.Logic}"));
    throw new SomeRulesUntestedException($"The following rules have not been tested to resolve as True: {separator}{untestedRules}{separator}");
}
This results in a nicely-readable test failure.
You could equally start with an empty static collection of _testedRules and at the end compare that set to the full set of rules.
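A sketch of that alternative, replacing the MarkRuleAsTested method and the final test shown above (GetAllRules is the same hypothetical helper for loading the full rule set):
private static readonly List<Rule> _testedRules = new List<Rule>();

private void MarkRuleAsTested(Rule testedRule)
{
    _testedRules.Add(testedRule);
}

[Fact]
public void ZZ_check_all_rules_have_been_triggered_by_other_tests()
{
    var untested = GetAllRules()
        .Where(r => _testedRules.All(t => t.Logic != r.Logic))
        .ToList();

    Assert.Empty(untested);
}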


Mock.Verify does not identify call made to mock service provided by IServiceProvider

I'm trying to write an integration test for a service method. The test compiles and runs without error, but it says that the number of calls that match the predicate are 0.
Test setup:
[TestCase]
public void Save_Submission_Processing_And_ClientGroupMapping_Type()
{
    Mock<ISubmissionRepository> submissionRepositoryMock = new Mock<ISubmissionRepository>();
    submissionRepositoryMock
        .Setup(x => x.GetOne(It.IsAny<Guid>()))
        .Returns(QueryResult<Submission>.Ok(new Submission()));

    IServiceCollection services = new ServiceCollection();
    services.AddSingleton(x => submissionRepositoryMock.Object);

    ClientGroupMappingService clientGroupMappingService = new ClientGroupMappingService(services.BuildServiceProvider());
    clientGroupMappingService.ProcessClientGroupMappingImport(Guid.NewGuid());

    submissionRepositoryMock.Verify(
        c => c.Save(It.Is<Submission>(d => d.SubmissionStatus == SubmissionStatus.Processing)),
        Times.Once);
}
Unit under test:
public class ClientGroupMappingService : IClientGroupMappingService
{
    private readonly ISubmissionRepository _submissionRepository;

    public ClientGroupMappingService(IServiceProvider serviceProvider)
    {
        _submissionRepository = serviceProvider.GetRequiredService<ISubmissionRepository>();
    }

    public void ProcessClientGroupMappingImport(Guid submissionID)
    {
        Submission submission = _submissionRepository.GetOne(submissionID).Value;
        submission.SubmissionStatus = SubmissionStatus.Processing;
        _submissionRepository.Save(submission);
        // ..other stuff
    }
}
Moq.MockException :
Expected invocation on the mock once, but was 0 times: c => c.Save(It.Is<Submission>(d => (int)d.SubmissionStatus == 2))
So Verify should see that the call was made to Save, and the param passed to Save matches the condition in the supplied predicate. My knee-jerk reaction is that once I pull the object out of the mock using submissionRepositoryMock.Object, I am no longer tracking the Mock, so calls to the Object are not going to register on the Mock. But if this is the case, what is the correct way to verify that my method made the required call?
The issue was in the "// ...other stuff" that I took out for brevity.
Later on in the method, the SubmissionStatus is updated again, and Moq records the REFERENCE to the object that was passed into the Save call rather than its VALUE at the time of the call, so all of the recorded invocations of Save appear to have been made with SubmissionStatus.Success (which isn't what the object held when Save was actually called).
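One way around this, sketched against the test above, is to capture the status at the moment Save is invoked using a Moq Callback and assert on the captured value rather than on the (later-mutated) Submission reference. Assert.Contains here is NUnit's; adapt it to your test framework:
// Sketch only: record the SubmissionStatus at the time each Save call is made.
var savedStatuses = new List<SubmissionStatus>();
submissionRepositoryMock
    .Setup(x => x.Save(It.IsAny<Submission>()))
    .Callback<Submission>(s => savedStatuses.Add(s.SubmissionStatus));

clientGroupMappingService.ProcessClientGroupMappingImport(Guid.NewGuid());

// Verify a Save happened, then assert on the value captured at call time.
submissionRepositoryMock.Verify(c => c.Save(It.IsAny<Submission>()), Times.AtLeastOnce);
Assert.Contains(SubmissionStatus.Processing, savedStatuses);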
Hope this makes sense and helps anyone with a similar problem.

Making AutoMoq return Fixture-created values for methods

I'd like to explore whether we can save time by configuring all Moq mocks created by AutoMoq to return Fixture-created values from their methods by default.
This would be beneficial when doing a test like the following:
[TestMethod]
public void Client_Search_SendsRestRequest()
{
    var client = fixture.Create<Client>();

    // Could be removed by implementing the mentioned functionality
    Mock.Get(client.JsonGenerator)
        .Setup(j => j.Search(It.IsAny<string>()))
        .Returns(fixture.Create("JsonBody"));

    var query = fixture.Create("query");
    client.Search(query);

    Mock.Get(client.RestClient).Verify(c => c.Execute(It.IsAny<RestRequest>()));
    Mock.Get(client.RestClient).Verify(c => c.Execute(It.Is<RestRequest>(r => record(r.Body) == record(client.JsonGenerator.Search(query)))));
}
Note that the generated values must be cached inside (?) the proxies; we want the same value "frozen" so we can check against it. Also, setting up the mock with Setup should override the created value.
So, how can we modify AutoMoq mocks to do this?
A simple test verifying that it works could be:
[TestMethod]
public void MockMethodsShouldReturnCreatedValues()
{
    Guid.Parse(new Fixture().Create<ITest>().Test());
}

public interface ITest
{
    string Test();
}
Definitely possible, just use the AutoConfiguredMoqCustomization instead of the AutoMoqCustomization. The mocks will use the fixture to generate return values for all their methods, properties and indexers (*).
Properties will be evaluated eagerly, whereas the return values of indexers and methods will be evaluated and cached when invoked for the first time.
(*) There are two exceptions to this rule - the customization cannot automatically set up generic methods or methods with ref parameters, as explained here. You'll have to set those up manually, with the help of the .ReturnsUsingFixture method.
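For reference, a small sketch of what this looks like, assuming the AutoFixture.AutoMoq package and the ITest interface from the question:
var fixture = new Fixture().Customize(new AutoConfiguredMoqCustomization());

var test = fixture.Create<ITest>();
var first = test.Test();    // fixture-generated value, cached on first invocation
var second = test.Test();   // returns the same cached value as 'first'

// An explicit Setup still overrides the auto-configured value;
// ReturnsUsingFixture pulls the return value from the fixture instead.
var mock = fixture.Create<Mock<ITest>>();
mock.Setup(t => t.Test()).ReturnsUsingFixture(fixture);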

How To Ensure an @Test Method Always Runs Before Any Others Regardless of Class, Suite or Group?

My reading of the TestNG docs suggests that if I have a test method marked like this:
@BeforeSuite(alwaysRun = true)
@Test
public void MyTestMethod() { ... }
then MyTestMethod would run before any other test defined anywhere, regardless of class, suite or group. But that does not seem to be the case.
Is there a way to make a test method run unconditionally before everything else? (And that if it fails no other tests will be run.)
Edit:
The test class:
class Suite_Setup extends BaseTestSuite {

    @BeforeSuite(alwaysRun = true)
    def setup() {
        System.out.println("Conducting test suite setup...")
        // Verify that the internal API is configured properly and that the API host is available...
        new Action(ApiHostname, new BasicCookieStore)
    }
}
Edit:
The answer:
We generate our own TestNG.xml files (automatically) and the @BeforeSuite method was not being included in it. Once it was included, @BeforeSuite had the expected effect.
However, it does appear that both @BeforeSuite (and presumably other @Before... and @After... annotations) can be mixed with @Test, and rather than inhibiting the execution of the annotated method, they cause it to run more than once!
Also, I was remiss in not indicating which version of TestNG I'm using. It's 6.2.
Try using groups either on class level or on method level.
In my case, I have a set of smoke tests that need to run before everything and if any of those fail, no other tests should run.
@Test(groups = "smoke")
public class SmokeTests {
    public void test1() {
        //
    }
    ...
}
And all other classes with tests are:
@Test(dependsOnGroups = "smoke")
public class OtherTests {
    public void test1() {
        //
    }
    ...
}
I think that the answer is
to separate the configuration methods from the test methods,
to use @BeforeSuite for the method to be executed before all tests in the suite (for example, to get an EntityManager),
to use dependsOnMethods to decide in which order the tests should run.
In the following example, testRemove will run after testInsert, testReadAll and testUpdate have run.
testReadAll will run after testInsert has run.
testUpdate will run after testInsert has run.
Note that there is no dependency between testReadAll and testUpdate.
@Test
public void testInsert()
{..}

@Test(dependsOnMethods = {"testInsert"})
public void testUpdate()
{..}

@Test(dependsOnMethods = {"testInsert"})
public void testReadAll()
{..}

@Test(dependsOnMethods = {"testInsert", "testUpdate", "testReadAll"})
public void testRemove()
{..}
Remove @Test; a method cannot be both a configuration method and a test method.
@Test(alwaysRun = true)
makes the test always run regardless of the methods or groups it depends on, even if they have failed.
Is there a way to mark a test method so it will unconditionally be run before everything else?
Just try assigning the target test the lowest possible priority value.
@Test(priority = -2147483648) // i.e. Integer.MIN_VALUE
public void myTest() {
    ...
}
You can read more about TestNG test priority there.
And that if it fails no other tests will be run.
You need to make the other tests depend on this test method by using one of the following options:
Assign your first method to some group and mark the other tests with dependsOnGroups.
Mark the other tests with dependsOnMethods.
If you use one of the dependency options, you do not need to set a priority on your first method; see the sketch after this list.
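A minimal sketch combining the two suggestions (the class and method names are made up):
import org.testng.annotations.Test;

public class SmokeAndDependentTests {

    // Runs first: lowest possible priority, and a group the other tests depend on.
    @Test(groups = "setup", priority = Integer.MIN_VALUE)
    public void environmentIsReachable() {
        // if this fails, everything depending on the "setup" group is skipped
    }

    // Skipped automatically if anything in the "setup" group failed.
    @Test(dependsOnGroups = "setup")
    public void someOtherTest() {
    }
}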

Does mocking affect your assertion count?

I'm noticing that when I use mock objects, PHPUnit correctly reports the number of tests executed but incorrectly reports the number of assertions I'm making. In fact, every time I mock, it counts as another assertion. A test file with 6 tests, 7 assert statements and one mock per test reported 6 tests, 13 assertions.
Here's the test file with all but one test removed (for illustration here), plus I introduced another test which doesn't stub to track down this problem. PHPUnit reports 2 tests, 3 assertions. I remove the dummy: 1 test, 2 assertions.
require_once '..\src\AntProxy.php';

class AntProxyTest extends PHPUnit_Framework_TestCase {

    const sample_client_id = '495d179b94879240799f69e9fc868234';
    const timezone = 'Australia/Sydney';
    const stubbed_ant = "stubbed ant";
    const date_format = "Y";

    public function testBlankCategoryIfNoCacheExists() {
        $cat = '';
        $cache_filename = $cat.'.xml';
        if (file_exists($cache_filename))
            unlink($cache_filename);
        $stub = $this->stub_FreshAnt($cat);
        $expected_output = self::stubbed_ant;
        $actual_output = $stub->getant();
        $this->assertEquals($expected_output, $actual_output);
    }

    public function testDummyWithoutStubbing() {
        $nostub = new AntProxy(self::sample_client_id, '', self::timezone, self::date_format);
        $this->assertTrue(true);
    }

    private function stub_FreshAnt($cat) {
        $stub = $this->getMockBuilder('AntProxy')
            ->setMethods(array('getFreshAnt'))
            ->setConstructorArgs(array(self::sample_client_id, $cat, self::timezone, self::date_format))
            ->getMock();
        $stub->expects($this->any())
            ->method('getFreshAnt')
            ->will($this->returnValue(self::stubbed_ant));
        return $stub;
    }
}
It's like an assert has been left in one of the framework's mocking methods. Is there a way to show every (passing) assertion being made?
After each test method completes, PHPUnit verifies the mock expectations set up during the test. PHPUnit_Framework_TestCase::verifyMockObjects() increments the number of assertions for each mock object created. You could override the method to undo this if you really want, by storing the current number of assertions, calling the parent method, and subtracting the difference.
protected function verifyMockObjects()
{
    $count = $this->getNumAssertions();
    parent::verifyMockObjects();
    $this->addToAssertionCount($count - $this->getNumAssertions());
}
Of course, verifyMockObjects() will throw an assertion failure exception if any expectation is unsatisfied, so you'll need to catch the exception and rethrow it after resetting the count. I'll leave that to you. :)
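For completeness, a rough sketch of that catch-and-rethrow, assuming the same pre-namespace PHPUnit as in the question:
protected function verifyMockObjects()
{
    $count = $this->getNumAssertions();
    try {
        parent::verifyMockObjects();
    } catch (Exception $e) {
        // Undo the mock-verification assertions before letting the failure propagate.
        $this->addToAssertionCount($count - $this->getNumAssertions());
        throw $e;
    }
    $this->addToAssertionCount($count - $this->getNumAssertions());
}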

Rational Functional Tester - How can I get scripts called from a parent script to use the parent's data pool?

I'm fairly new to Rational Functional Tester (Java), but I have one big gap in my understanding. I have an application that is in an agile development environment, so some of the screens can be in flux as new interfaces are brought online.
For this reason I'm trying to modularize my test scripts. For example: I would like to have a login script, a search script, and a logout script.
I would then stitch these together (pseudo code)
Call Script components.security.Login;
Call Script components.search.Search;
//verification point
Call Script components.security.Logout;
By breaking the test script into discrete chunks (functional units) I believe I would be better able to adapt to change. If the login script changed, I would fix or re-record it once, rather than once for every script in the application.
Then I would call that script, say, "TestSituation_001". It would need to refer to several different datapools: in this instance a User datapool (instead of a superUser datapool) and a TestSituation_001 datapool, or possibly some other datapools as well. The verification point would use the situational datapool for its check.
Now, this is how I would do it in an ideal world. What is bothering me at the moment is that it appears I would need to do something entirely different in order to get the child scripts to inherit the parent's datapool.
So my questions are these:
Why don't child scripts just inherit the calling script's data pool?
How can I make them do it?
Am I making poor assumptions about the way this should work?
If #3 is true, then how can I do better?
As a side note, I don't mind hacking the heck out of some Java to make it work.
Thanks!
I solved my own problem. For those of you who are curious, check this out:
public abstract class MyTestHelper extends RationalTestScript
{
    protected void useParentDataPool() {
        if (this.getScriptCaller() != null) {
            IDatapool dp = this.getScriptCaller().getDatapool();
            IDatapoolIterator iterator = DatapoolFactory.get().open(dp, "");
            if (dp != null && iterator != null) {
                // if the datapool is not null, substitute it for the current datapool
                this.dpInitialization(dp, iterator);
            }
        }
    }
}
This will use the same iterator too. Happy hunting...
Actually, after some reflection, I made a method that would make any given script use the Root calling script's DataPool. Again, happy hunting to those who need it...
/*
 * preconditions: there is a parent caller
 * postconditions: the current script is now using the same datapool / datapool iterator as the root script
 */
protected void useRootDataPool() {
    // if there is no parent, then this wouldn't work, so return with no result
    if (this.getScriptCaller() == null) return;

    // assume that we're at the root node to start
    RationalTestScript root = this;
    while (root.getScriptCaller() != null) {
        root = root.getScriptCaller();
    }

    // if this node is the root node, no need to continue; the default attached datapool will suffice
    if (this.equals(root)) return;

    // get the root's datapool (which would be the parent's parent and so on to the topmost)
    IDatapool dp = root.getDatapool();
    if (dp != null) {
        // check to make sure that we're not trying to re-initialize with the same datapool (by name);
        // if we are, then leave
        if (dp.getName().equals(this.getDatapool().getName())) return;

        // this basically says "give me the iterator already associated with this pool"
        IDatapoolIterator iterator = DatapoolFactory.get().open(dp, "");

        // if we have an iterator AND a datapool (from above), then we can initialize
        if (iterator != null) {
            // dpInitialization isn't normally called directly, but it works just fine here
            this.dpInitialization(dp, iterator);
            // log information
            logInfo("Using data pool from root script: " + root.getScriptName());
        }
    }
}
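A hypothetical child script showing how the helper might be used (the script name and datapool column are made up):
public class Search extends MyTestHelper {

    public void testMain(Object[] args) {
        // Switch this child script onto the root caller's datapool before reading any data.
        useRootDataPool();

        // dpString() now reads from the root script's datapool ("searchTerm" is a hypothetical column).
        String searchTerm = dpString("searchTerm");

        // ... drive the search UI with searchTerm and add the verification point ...
    }
}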
