How can I reuse containers during integration tests with Quarkus? - integration-testing

I currently have a few integration tests that are running fine. However, I'm using the @QuarkusTestResource annotation to launch containers via Testcontainers before Quarkus starts, and I adapt my properties to the random Testcontainers port by overriding the start method of my container class, which implements QuarkusTestResourceLifecycleManager.
To optimize a little, I'd like to reuse the container that launches Kafka across several of my integration tests, since it takes 6-8 seconds to start. I've not been able to do so: since Quarkus manages the lifecycle, every time it stops between test classes it also stops every container. I tried mixing singleton containers from Testcontainers with the Quarkus test resources, but it doesn't work. Here's a snippet of the start of one of my integration test classes:
@QuarkusTest
@Tag(INTEGRATION_TEST)
@QuarkusTestResource(value = KafkaNode.class, restrictToAnnotatedClass = true, parallel = true)
class MyIntegrationTest { ... }
And my KafkaNode class:
public class KafkaNode implements QuarkusTestResourceLifecycleManager {

    static KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))
            .withNetworkAliases("kafkaNode")
            .withNetwork(CommonNetwork.getInstance());

    @Override
    public Map<String, String> start() {
        kafka.start();
        return Collections.singletonMap("myapp.kafka.servers", kafka.getBootstrapServers());
    }

    @Override
    public void stop() {
        kafka.close();
    }
}
I'm open to ideas to reuse, or even rework, my tests so that I can reuse containers in another way.

You can use the experimental reusable mode for your container via .withReuse(true).
Please note that in order for this to work you need to:
set the testcontainers.reuse.enable=true property in the ~/.testcontainers.properties file
NOT call .close() or .stop() (or anything else that programmatically stops the container)
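For reference, the opt-in is a single line in that file, set per developer machine or CI agent that should reuse containers:

# ~/.testcontainers.properties
testcontainers.reuse.enable=true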
Once you've set up everything accordingly, your container should:
not be stopped/garbage collected at the end of the test run anymore
be picked up again at the next test run - avoiding a new container start
Please note that all runtime modifications (configuration, data, topics etc.) will still be there at the next test run. You need to take suitable measures to avoid non-deterministic tests - like using different / random topic names for each test run.
Also note that you'll need to manually stop/delete the container when you're done with it. Testcontainers will not remove the container for you anymore.
You can also read this article that describes how to do it from Spring for inspiration: Reuse Containers With Testcontainers for Fast Integration Test
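Applied to the KafkaNode from the question, a minimal sketch might look like this (assuming reuse has been enabled in ~/.testcontainers.properties as described above; the network/alias configuration from the question is left out here, since custom networks may interfere with reuse):

import java.util.Collections;
import java.util.Map;

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;

public class KafkaNode implements QuarkusTestResourceLifecycleManager {

    // Marked as reusable; Testcontainers only honours this when
    // testcontainers.reuse.enable=true is set on the machine running the tests.
    static KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))
            .withReuse(true);

    @Override
    public Map<String, String> start() {
        // Attaches to a matching reusable container if one is already running,
        // instead of starting a new one.
        kafka.start();
        return Collections.singletonMap("myapp.kafka.servers", kafka.getBootstrapServers());
    }

    @Override
    public void stop() {
        // Intentionally empty: calling kafka.close() or kafka.stop() here would
        // stop the container and defeat reuse. Remove the container manually
        // (e.g. docker rm -f) when you no longer need it.
    }
}

With this in place the Kafka container survives across test runs, while each test class still receives the myapp.kafka.servers property pointing at the already-running broker.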

When using multiple QuarkusTestProfiles, the test context is reset completely between profiles, so even static variables are not preserved.
I have not yet found a way to use e.g. a single database across all QuarkusTests in one test run when using different QuarkusTestProfiles.
Even if it seems that the container exists across test run boundaries, I can't reference it in later test runs to determine the mapped port, etc.

Related

Is there any way to get the C# object/data on which an NUnit test is failing?

I am writing unit tests for a complex application which has many rules to be checked in a single flow, using NUnit and Playwright in .NET 5. To save time writing the test scripts for Playwright (a front-end testing tool), we have used a library named Bogus to create dummy data dynamically based on the rules (the test cases have numerous rules to check, and it was far more difficult to write fresh data for every case). I am using the Playwright script inside the NUnit test and providing the data source via [TestCaseSource("MethodName")] to supply a dynamic data object for the different cases.
Now we are facing a problem: some of the test cases pass and some fail, and we are unable to identify which particular test case is causing the problem, because the test case data is provided by the dynamic source and, in that source, the data is generated by the Bogus library based on the rules we have defined. Plus, we cannot watch the tests for a long time; that's why we have automated the process.
[Test]
[TestCaseSource("GetDataToSubmit")]
public async Task Test_SubmitAssignmentDynamicFlow(Assignment assignment)
{
    using var playwright = await Playwright.CreateAsync();
    await using var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false,
        ...
    });
    ....

private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    //creating data for simple job
    var simpleAssignment = new DummyAssigmentGenerator()
        ....
        .Generate();
    yield return new TestCaseData(simpleAssignment);
    ....
Now, my question is: is there any way to view what the actual values in the object were for the failed case when we look at the whole report of the test cases? That way we can see which values are causing problems and eventually fix them.
Two approaches...
Assuming that Assignment is your own class, override its ToString() method to display whatever you would like to see. That string will become part of the name of the test case generated, like...
Test_SubmitAssignmentDynamicFlow(YOUR_STRING)
Apply a name to each TestCaseData item you yield using the SetName() fluent method. In that case, you are supplying the full display name of the test case, not just the part in parentheses. Use {m}(YOUR_STRING) in order to have it appear the same as in the first approach.
If you can use it, the first approach is clearly the simpler of the two.
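A minimal sketch of both approaches, assuming Assignment and DummyAssigmentGenerator are shaped roughly as in the question (the Title property and the no-argument Generate() call are purely illustrative):

using System.Collections.Generic;
using NUnit.Framework;

// Approach 1: give Assignment a readable ToString(); NUnit uses it
// when building the generated test case name.
public class Assignment
{
    public string Title { get; set; } // illustrative property

    public override string ToString() => $"Assignment(Title={Title})";
}

public class SubmitAssignmentTests
{
    // Approach 2: name each TestCaseData item explicitly when yielding it.
    private static IEnumerable<TestCaseData> GetDataToSubmit()
    {
        var simpleAssignment = new DummyAssigmentGenerator().Generate();

        // "{m}" keeps the test method name, so the display name matches approach 1.
        yield return new TestCaseData(simpleAssignment)
            .SetName("{m}(" + simpleAssignment + ")");
    }
}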

How to mock an inlined InitiatingFlow return value during another flow

I have SomeBigFlow that calls multiple subflows inside it, i.e. ValidateFlowA and ValidateFlowB. Assume it is mandatory for A and B to be initiating flows, not functions.
How do I mock a return value for ValidateFlowA when I run the SomeBigFlow in Junit?
I've seen some references to using registerAnswer to mock flows' return values here. I am also curious why this function is only available for InternalMockNetwork.MockNode but not MockNetwork.StartedMockNode, which is typically used during JUnit testing.
I thought I could replicate it with node[1].registerAnswer(ValidateFlowA.class, 20). But when I ran node[1].startFlow(SomeBigFlow).resultFuture.getOrThrow(), ValidateFlowA still used its default call implementation instead of returning the mocked integer value 20. Maybe I'm using it wrong.
Any pointers on how to make this work, or is there a solution for mocking inlined subflows' return values? The only other way I can think of is to have a rule of thumb that whenever an inlined subflow is called, it is wrapped in an open fun that can be overridden during MockNetwork testing - but this makes inlined subflows tedious, so I'm hoping for a neater way.
For now, you'd have to use a similar approach to the one outlined here: Corda with mockito_kotlin for unit test.
In summary:
Make the FlowLogic class you are testing open, and move the call to the subflow into an open method
For testing, create a subclass of the open FlowLogic class where you override the open method to return a dummy result
Use this subclass in testing
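A rough Kotlin sketch of that pattern, using the flow names from the question (SomeBigFlow is assumed here to return an Int, and ValidateFlowA to take no constructor arguments):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic

// Production flow: the subflow call is isolated behind an open method,
// so a test can swap it out.
open class SomeBigFlow : FlowLogic<Int>() {

    @Suspendable
    override fun call(): Int {
        val validationResult = validateA()
        // ... rest of the real flow logic ...
        return validationResult
    }

    // Open seam around the initiating subflow.
    @Suspendable
    protected open fun validateA(): Int = subFlow(ValidateFlowA())
}

// Test-only subclass: never runs ValidateFlowA, just returns a canned value.
class MockedSomeBigFlow : SomeBigFlow() {
    @Suspendable
    override fun validateA(): Int = 20
}

In the test you then start MockedSomeBigFlow from the mock node instead of SomeBigFlow; the overridden validateA() returns 20 without ValidateFlowA ever being invoked.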

Separating data and test settings in QTests

I am currently using the following pattern when creating tests with QTest:
One test class per production class.
If a class has some 'global' setting, run the test class multiple times, once with each such setting.
Each production class method has one test method.
Each test method has a _data method.
Each _data method specifies the settings and data to be used and names the cases.
This last point somewhat bothers me because I am not passing just data but also data to be used for initialising that particular test. Sometimes it looks weird and even though my tests are short they are not all that intuitive because of the initialisation logic.
The alternative pattern I know of is to split each test method (breaking my rule #3) based on these initialisation needs. On one hand it would eliminate a lot of _data test methods, but it would also make the test classes much bigger and no longer easily relatable to the production class (the naming would help, though). Most Google Test suites are written like this.
Another alternative would be to use the global state of the object, much like I treat global settings. If the object is either valid or invalid, then that would not be part of each _data method but rather a setting of the test class, which would then run in either configuration.
My main concern is maintainability. With my current approach I sometimes struggle to understand the nuances of the settings I pass to the tests and I need some sensible way to separate them and not to burden myself even more by it.
For global settings you run the test class multiple times, so IMHO doing the same for local settings doesn't really "violate" your rule #3, it is more an extension of rule #2.
Alternatively you could make the initialization routine another thing that is part of the test data.
Something like
private slots:
    void someMethodTest_data()
    {
        QTest::addColumn<QByteArray>("settings");
        //....
        QTest::addRow("case1") << QByteArray("settings1") << ....
    }

    void someMethodTest()
    {
        QFETCH(QByteArray, settings);
        const QByteArray initMethod = QByteArray(QTest::currentTestFunction()) + "_init_" + settings;
        QMetaObject::invokeMethod(this, initMethod.constData(), Qt::DirectConnection);
        // commence test
    }

protected slots:
    void someMethodTest_init_settings1();

How to unit test a cache layer

I have added a cache layer to my project. Now I wonder: can I unit test the methods that manipulate the cache, or is there a better way to test the layer's logic?
I just want to check the process, for example:
1- When the item is not in the cache, the method should hit the database.
2- The next time, the method should use the cache.
3- When a change is made to the database, the cache should be cleared.
4- If the data retrieved from the database is null, it shouldn't be added to the cache.
I want to ensure that the logic I have placed in these methods is working as expected.
I'm presuming the cache is a third party cache? If so, I would not test it. You're testing someone else's code otherwise.
If this caching is so important you need to test it, I'd go with an integration or acceptance test. In other words, hit the page(s)/service(s) in question and check the content that way. By the very definition of what you wish to test, this is not a unit test.
On the flip side, if the cache is one you've rolled yourself, you'll easily be able to unit test the functionality. You might want to check out verification-based testing in order to test the behavior of the cache, as opposed to actually checking that stuff is added to/removed from the cache. Check out mocking for ways to achieve this.
To test for behaviour via Mock objects (or something similar) I'd do the following - although your code will vary.
class Cacher
{
    public void Add(Thing thing)
    {
        // Complex logic here...
    }

    public Thing Get(int id)
    {
        // More complex logic here...
        return null; // placeholder so the sketch compiles
    }
}

void DoStuff()
{
    var cacher = new Cacher();
    var thing = cacher.Get(50);
    thing.Blah();
}
To test the above method I'd have a test which used a mock Cacher. You'd need to pass this into the method at runtime or inject the dependency into the constructor. From there the test would simply check that cacher.Get(50) is invoked, not that the item is actually retrieved from the cache. This is testing the behavior of how the cacher should be used, not that it is actually caching/retrieving anything.
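A rough sketch of that kind of behaviour test, assuming DoStuff is refactored to receive its cacher as a dependency and using Moq purely as an example mocking library (the ICacher interface and StuffDoer class are introduced here for illustration):

using Moq;
using NUnit.Framework;

public interface ICacher
{
    void Add(Thing thing);
    Thing Get(int id);
}

// DoStuff refactored so the cacher is injected instead of newed up inside the method.
public class StuffDoer
{
    private readonly ICacher cacher;

    public StuffDoer(ICacher cacher) => this.cacher = cacher;

    public void DoStuff()
    {
        var thing = cacher.Get(50);
        thing.Blah();
    }
}

public class StuffDoerTests
{
    [Test]
    public void DoStuff_AsksTheCacheForTheThing()
    {
        var cacher = new Mock<ICacher>();
        cacher.Setup(c => c.Get(50)).Returns(new Thing());

        new StuffDoer(cacher.Object).DoStuff();

        // Behaviour, not state: we only verify the interaction with the cache happened.
        cacher.Verify(c => c.Get(50), Times.Once());
    }
}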
You could then fall back to state-based testing for the Cacher in isolation, e.g. when you add/remove items.
Like I said previously, this may be overkill depending on what you wish to do. However you seem pretty confident that the caching is important enough to warrant this sort of testing. In my code I try to limit mock objects as much as possible, though this sounds like a valid use case.

Why would I unit test the business layer in MVP?

I created the following sample method in the business logic layer. My database doesn't allow nulls for the name and parent columns:
public void Insert(string catName, long catParent)
{
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
So I unit test this, and the tests for an empty name and an empty parent will fail. To get around that issue I have to refactor the Insert method as follows:
public void Insert(string catName, long catParent)
{
    // added to pass the test
    if (string.IsNullOrEmpty(catName)) throw new InvalidOperationException("wrong action. name is empty.");
    if (catParent <= 0) throw new InvalidOperationException("wrong action. parent is not valid.");

    // real business logic
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
My entire business layer consists of simple calls to the database. So now I'm validating the data again! I had already planned to do my validation in the UI and test that kind of stuff in UI unit tests. What should I test in my business logic method other than validation-related tasks? And if there is nothing to unit test, why does everybody say "unit test all the layers" and similar things, which I found a lot online?
The technique involved in testing is to break down your program into smaller parts (smaller components or even classes) and test those small parts. As you assemble those parts together, you write less comprehensive tests -- the smaller parts are already proven to work -- until you have a functional, tested program, which you then give to users for "user tests".
It's preferable to test smaller parts because:
It's simpler to write the tests. You'll need less data, you only setup one object, you have to inject less dependencies.
It's easier to figure out what to test. You know the failing conditions from a simple reading of the code (or, better yet, from the technical specification).
Now, how can you guarantee that your business layer, simple as it is, is correctly implemented? Even a simple database insert can fail if badly written. Besides, how can you protect yourself from changes? Right now the code works, but what will happen in the future if the database is changed or someone updates the business logic?
However, and this is important, you actually don't need to test everything. Use your intuition (which is also called experience) to understand what needs testing and what doesn't. If your method is simple enough, just make sure the client code is correctly tested.
Finally, you've said that all your validation will occur in the UI. The business layer should be able to validate the data in order to increase reuse in your application. Fail to do that, and whoever changes your code in the future might create a new UI and forget to add the required validations.
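For instance, once the validation lives in the business layer, a short NUnit test against the refactored Insert above could look like this (CategoryService is a hypothetical class holding that method):

using System;
using NUnit.Framework;

public class CategoryServiceTests
{
    [Test]
    public void Insert_WithEmptyName_Throws()
    {
        var service = new CategoryService(); // hypothetical owner of the Insert method

        // The business layer rejects invalid input on its own,
        // regardless of what the UI does; it throws before touching the database.
        Assert.Throws<InvalidOperationException>(() => service.Insert("", 1));
    }
}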
