Separating data and test settings in QTest

I am currently using the following pattern when creating tests with QTest:
1. One test class per production class.
2. If a class has some 'global' setting, run the test class multiple times, once per setting.
3. Each production-class method has one test method.
4. Each test method has a _data method.
5. Each _data method specifies the settings and data to be used and names the cases.
This last point somewhat bothers me, because I am not passing just the data under test but also the data used to initialise that particular test. Sometimes it looks weird, and even though my tests are short they are not all that intuitive because of the initialisation logic.
The alternative pattern I know of is to split each test method (breaking my rule #3) based on these initialisation needs. On one hand it would eliminate a lot of _data methods, but it would also make the test classes much bigger and no longer easily relatable to the production class (the naming would help, though). Most Google Test suites are written like this.
Another alternative would be to use the global state of the object, much like I treat global settings. If the object can be either valid or invalid, that would not be part of each _data method but rather a setting of the test class, which would be run in either configuration.
My main concern is maintainability. With my current approach I sometimes struggle to understand the nuances of the settings I pass to the tests, and I need some sensible way to separate them without burdening myself even more.

For global settings you run the test class multiple times, so IMHO doing the same for local settings doesn't really "violate" your rule #3; it is more an extension of rule #2.
Alternatively you could make the initialization routine another thing that is part of the test data.
Something like
private slots:
    void someMethodTest_data()
    {
        QTest::addColumn<QByteArray>("settings");
        //....
        QTest::addRow("case1") << QByteArrayLiteral("settings1") << ....
    }
    void someMethodTest()
    {
        QFETCH(QByteArray, settings);
        // build the init slot name from the current test function and the settings value
        const QByteArray initMethod = QByteArray(QTest::currentTestFunction()) + "_init_" + settings;
        QMetaObject::invokeMethod(this, initMethod.constData(), Qt::DirectConnection);
        // commence test
    }

protected slots:
    void someMethodTest_init_settings1();
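The matching init slot then just puts the fixture into the configuration named in the row. A minimal sketch, assuming a made-up member and setter (not from the original answer):

void someMethodTest_init_settings1()
{
    // hypothetical: configure the object under test for the "settings1" case
    m_tested.setMode(SomeClass::ModeA);
}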

Related

Is there any way to get the C# object/data on which an NUnit test is failing?

I am writing unit tests for a complex application which has many rules to be checked in a single flow, using NUnit and Playwright in .NET 5. To save time writing the test scripts for Playwright (a front-end testing tool), we have used a library named Bogus to create dummy data dynamically based on the rules (the test cases have numerous rules to be checked and it was much too difficult to write fresh data for every case). I am using the Playwright script inside the NUnit test and providing the data source with [TestCaseSource("MethodName")] to supply a dynamic data object for the different cases.
Now we are facing a problem: some of the test cases pass and some fail, and we are unable to identify which test case is causing the problem, because the test case data comes from the dynamic source, and in that source the data is generated by the Bogus library on the basis of the rules we have defined. Plus, we cannot watch the tests for long stretches; that's why we have automated the process.
[Test]
[TestCaseSource("GetDataToSubmit")]
public async Task Test_SubmitAssignmentDynamicFlow(Assignment assignment)
{
    using var playwright = await Playwright.CreateAsync();
    await using var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false,
        ...
    });
    ....

private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    //creating data for simple job
    var simpleAssignment = new DummyAssigmentGenerator()
        ....
        .Generate();

    yield return new TestCaseData(simpleAssignment);
    ....
Now, my question is: is there any way we can see what the actual values in the object were for the failed case when we look at the whole test report? That way we could tell which values are causing problems and eventually fix them.
Two approaches...
Assuming that the Assignment your DummyAssignmentGenerator produces is your own class, override its ToString() method to display whatever you would like to see. That string will become part of the name of the generated test case, like...
Test_SubmitAssignmentDynamicFlow(YOUR_STRING)
Apply a name to each TestCaseData item you yield using the SetName() fluent method. In that case, you are supplying the full display name of the test case, not just the part in parentheses. Use {m}(YOUR_STRING) in order to have it appear the same as in the first approach.
If you can use it, the first approach is clearly the simpler of the two.
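A minimal sketch of both approaches (the Assignment properties below are made up; only the ToString() override and the SetName() call matter):

// Approach 1: override ToString() on the class that is passed to TestCaseData,
// so the generated test name shows its values
public class Assignment
{
    public string Title { get; set; }      // hypothetical property
    public long CategoryId { get; set; }   // hypothetical property

    public override string ToString() => $"Title={Title}, CategoryId={CategoryId}";
}

// Approach 2: give each yielded case an explicit display name;
// the {m} token is replaced by the test method name
private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    var simpleAssignment = new DummyAssigmentGenerator()
        .Generate();

    yield return new TestCaseData(simpleAssignment)
        .SetName("{m}(" + simpleAssignment + ")");
}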

Pattern for async testing with scalatest and mocking objects with mockito

I am writing unit tests for some async sections of my code (returning Futures) that also involve the need to mock a Scala object.
Following these docs, I can successfully mock the object's functions. My question stems from the fact that withObjectMocked[FooObject.type] returns Unit, whereas async tests in scalatest require either an Assertion or a Future[Assertion] to be returned. To get around this, I'm creating vars in my tests that I reassign within the function passed to withObjectMocked[FooObject.type], which ends up looking something like this:
class SomeTest extends AsyncWordSpec with Matchers with AsyncMockitoSugar with ResetMocksAfterEachAsyncTest {
  "wish i didn't need a temp var" in {
    var ret: Future[Assertion] = Future.failed(new Exception("this should be something")) // <-- note the need to create the temp var
    withObjectMocked[SomeObject.type] {
      when(SomeObject.someFunction(any)) thenReturn Left(Error("not found"))
      val mockDependency = mock[SomeDependency]
      val testClass = ClassBeingTested(mockDependency)
      ret = testClass.giveMeAFuture("test_id") map { r =>
        r should equal(Error("not found"))
      } // <-- set the real Future[Assertion] value here
    }
    ret // <-- finally, explicitly return the Future
  }
}
My question then is: is there a better/cleaner/more idiomatic way to write async tests that mock objects, without the need to jump through this bit of a hoop? For some reason I figured using AsyncMockitoSugar instead of MockitoSugar would have solved that for me, but withObjectMocked still returns Unit. Is this maybe a bug and/or a candidate for a feature request (an async version of withObjectMocked that returns the value of the function block rather than Unit)? Or am I missing how to accomplish this sort of task?
You should refrain from using object mocking in a multi-threaded environment, as it doesn't play well with it.
This is because the object's code is stored as a singleton instance, so it's effectively global.
When you use withObjectMocked you're effectively, forcefully overriding that var (the code takes care of restoring the original, hence the syntax of using it as a "resource", if you will).
Because this var is global/shared, if you have multi-threaded tests you'll end up with random behaviour; this is the main reason why no async API is provided.
In any case, this is a last-resort tool. Every time you find yourself using it you should stop and ask whether there isn't something wrong with your code first; there are quite a few patterns to help you out here (like injecting the dependency, as in the sketch below), so you should rarely have to do this.
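As a sketch of the dependency-injection alternative (assuming ClassBeingTested can take the collaborator as a constructor argument; the SomeService trait and the Result placeholder below are made up), the temp var disappears because a plain mock is enough and the test simply returns its Future[Assertion]:

// hypothetical: put the object's behaviour behind a trait and inject it,
// instead of mocking the SomeObject singleton
trait SomeService {
  def someFunction(id: String): Either[Error, Result] // Result is a placeholder type
}

// ClassBeingTested now receives a SomeService instead of calling SomeObject directly

class SomeTest extends AsyncWordSpec with Matchers with AsyncMockitoSugar {
  "no temp var needed" in {
    val service        = mock[SomeService]
    val mockDependency = mock[SomeDependency]
    when(service.someFunction(any)) thenReturn Left(Error("not found"))

    val testClass = ClassBeingTested(mockDependency, service)

    testClass.giveMeAFuture("test_id") map { r =>
      r should equal(Error("not found"))
    } // last expression of the test, so the Future[Assertion] is returned directly
  }
}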

How to manipulate the application configs for controller tests?

I'm writing functional / controller tests for a ZF3 application (driven by PHPUnit and zendframework/zend-test). Like this:
public function testWhatEver()
{
    $this->dispatch('/');
    $this->assertResponseStatusCode(Response::STATUS_CODE_200);
}
It's working pretty well. But now I have a case where I need to test the application with multiple mutually exclusive configs.
E.g. the case "authentication": the application provides multiple authentication methods (let's say AuthA, AuthB, and AuthC). (Which one is used is configurable via the auth.type value in the config file.) I want to test each of them. That means it's not enough to have special test configs in /config/autoload/test/*{local|global}.php; I need to be able to manipulate them for every test (before I call the dispatch(...) method).
How to manipulate the application configs for / from controller tests (on the fly)?
If no better solution can be found, a possible workaround might be to edit the config file (by using file_put_contents(...) or something like this) before every test. But it's a bit ugly (and slow).
In general I see no really nice solution for this problem, but there are some more or less acceptable workarounds:
Workaround 1: manipulating the relevant config file for every test
$configs = file_get_contents(...);
searchByRegexAndManipulateConfigs(...);
file_put_contents(...);
It's a lot of effort and would make the tests slower (due to reading from / writing to the filesystem).
Workaround 2: simple files with only one config value
We can create files like config.auth.type.php or config.auth.type.txt (one per config value, to keep the file really simple) and use an include or file_get_contents(...) call as the value in the config. Then the value in the file needs to be manipulated before the test execution.
It's a bit less effort (we don't need to write a complex regex), but it might make the tests considerably slower, since every application request would start by reading an additional file.
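For example, the config entry could read the value straight from such a file (a minimal sketch; the file name follows the naming above):

// /config/autoload/test/local.php -- workaround 2
return [
    'auth' => [
        // the test's setUp() writes e.g. "AuthB" into this file before dispatching
        'type' => trim(file_get_contents(__DIR__ . '/config.auth.type.txt')),
    ],
];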
Workaround 3: passing config values through GLOBALS
It's the simplest and fastest variant. We just save the needed value into a global variable and read it in the config (file's) array. After the test we remove the variable:
AuthBTest
...
protected function setUp() // or setUpBeforeClass()
{
    parent::setUp();
    $GLOBALS['appTestConfigs']['auth.type'] = 'AuthB';
}
protected function tearDown() // or tearDownAfterClass()
{
    parent::tearDown();
    unset($GLOBALS['appTestConfigs']);
}
...
/config/autoload/test/local.php
return [
    'auth' => [
        'type' => isset($GLOBALS['appTestConfigs']['auth.type'])
            ? $GLOBALS['appTestConfigs']['auth.type']
            : 'AuthA',
    ],
];

How to unit test a cache layer

I have added a cache layer to my project. Now I wonder whether I can unit test the methods that manipulate the cache, or is there a better way to test the layer's logic?
I just want to check the process, for example:
1. When the item is not in the cache, the method should hit the database.
2. The next time, the method should use the cache.
3. When a change is made to the database, the cache should be cleared.
4. If the data retrieved from the database is null, it shouldn't be added to the cache.
I want to ensure that the logic I have placed in the methods is working as expected.
I'm presuming the cache is a third-party cache? If so, I would not test it; you'd be testing someone else's code otherwise.
If this caching is so important that you need to test it, I'd go with an integration or acceptance test. In other words, hit the page(s)/service(s) in question and check the content that way. By the very definition of what you wish to test, this is not a unit test.
On the flip side, if the cache is one you've rolled yourself, you'll easily be able to unit test the functionality. You might want to check out verification-based testing in order to test the behaviour of the cache, as opposed to actually checking that stuff is added to/removed from the cache. Check out mocking for ways to achieve this.
To test for behaviour via mock objects (or something similar) I'd do the following - although your code will vary.
class Cacher
{
    public void Add(Thing thing)
    {
        // Complex logic here...
    }

    public Thing Get(int id)
    {
        // More complex logic here...
    }
}

void DoStuff()
{
    var cacher = new Cacher();
    var thing = cacher.Get(50);
    thing.Blah();
}
To test the above method I'd have a test which uses a mock Cacher. You'd need to pass this into the method at runtime or inject the dependency into the constructor. From here the test would simply check that cacher.Get(50) is invoked, not that the item is actually retrieved from the cache. This is testing the behaviour of how the cacher should be used, not that it is actually caching/retrieving anything.
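A minimal sketch of such a behaviour check, assuming the cache is injected behind an ICacher interface, the consuming class (called StuffDoer here, a made-up name) takes it via its constructor, and a mocking library such as Moq is available:

public interface ICacher
{
    void Add(Thing thing);
    Thing Get(int id);
}

[Test]
public void DoStuff_AsksTheCacheForTheThing()
{
    var cacher = new Mock<ICacher>();
    cacher.Setup(c => c.Get(50)).Returns(new Thing());

    var sut = new StuffDoer(cacher.Object); // constructor injection of the cache
    sut.DoStuff();

    // behaviour check: the cache was asked for id 50; nothing about what it actually stores
    cacher.Verify(c => c.Get(50), Times.Once);
}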
You could then fall back to state-based testing for the Cacher in isolation, e.g. adding and removing items.
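Such a state-based test might look like this (a sketch; the Id property on Thing is assumed to be the cache key):

[Test]
public void Add_ThenGet_ReturnsTheSameThing()
{
    var cacher = new Cacher();
    var thing = new Thing { Id = 50 }; // hypothetical property used as the key

    cacher.Add(thing);

    Assert.AreEqual(thing, cacher.Get(50));
}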
Like I said previously, this may be overkill depending on what you wish to do. However you seem pretty confident that the caching is important enough to warrant this sort of testing. In my code I try to limit mock objects as much as possible, though this sounds like a valid use case.

Why would I unit test the business layer in MVP?

I created the following sample method in the business logic layer. My database doesn't allow nulls for the name and parent columns:
public void Insert(string catName, long catParent)
{
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
So I unit test this, and the tests for an empty name and an empty parent will fail. To get around that issue I have to refactor the Insert method as follows:
public void Insert(string catName, long catParent)
{
    //added to pass the test
    if (string.IsNullOrEmpty(catName))
        throw new InvalidOperationException("wrong action. name is empty.");
    if (catParent == 0)
        throw new InvalidOperationException("wrong action. parent is empty.");

    //real business logic
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
My entire business layer consists of simple calls to the database, so now I'm validating the data again! I had already planned to do my validation in the UI and test that kind of stuff in the UI unit tests. What should I test in my business logic method other than validation-related tasks? And if there is nothing to be unit tested, why does everybody say "unit test all the layers" and the like, which I found a lot online?
The technique involved in testing is to break your program down into smaller parts (smaller components or even classes) and test those small parts. As you assemble those parts together, you make less comprehensive tests (the smaller parts are already proven to work) until you have a functional, tested program, which you then give to users for "user tests".
It's preferable to test smaller parts because:
It's simpler to write the tests. You need less data, you only set up one object, and you have fewer dependencies to inject.
It's easier to figure out what to test. You know the failing conditions from a simple reading of the code (or, better yet, from the technical specification).
Now, how can you guarantee that your business layer, simple as it is, is correctly implemented? Even a simple database insert can fail if badly written. Besides, how can you protect yourself from changes? Right now the code works, but what will happen in the future if the database is changed or someone updates the business logic?
However, and this is important, you actually don't need to test everything. Use your intuition (which is also called experience) to understand what needs testing and what doesn't. If your method is simple enough, just make sure the client code is correctly tested.
Finally, you've said that all your validation will occur in the UI. The business layer should be able to validate the data in order to increase reuse in your application. Fail to do that, and whoever changes your code in the future might create a new UI and forget to add the required validations.
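For instance, the guard clauses added to Insert above are themselves behaviour worth pinning down with a unit test, independent of any UI (a sketch, assuming NUnit and a made-up CategoryLogic class holding the method):

[Test]
public void Insert_WithEmptyName_Throws()
{
    var logic = new CategoryLogic(); // hypothetical class containing Insert

    Assert.Throws<InvalidOperationException>(() => logic.Insert("", 1));
}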
