How can I break up a long, multi-step integration test?

I have a really long integration test that simulates a sequential process involving many different interactions with a couple of Java servlets. The servlets' behavior depends on the values of the parameters being posted in the request, so I wanted to test every permutation to make sure my servlets are behaving as expected.
Currently, my integration test is in one long function called "testServletFunctionality()" that goes something like this:
//Configure a mock request
//Post request to servlet X
//Check database for expected changes
//Re-configure mock request
//Re-post request to servlet X
//Check database for expected changes
//Re-configure mock request
//Post request to servlet Y
//Check database for expected changes
...
and each configure/post/check step is about 20 lines of code, so the function is very long.
What is the proper way to break up or organize a long, sequential, repetitive integration test like this?

The main problem with integration tests (ITs) is usually that the setup is very expensive. Tests normally should not depend on each other or on the order in which they are executed, but for ITs, test #2 will always fail if you don't run test #1 (login).
Sad.
The solution is to treat these tests like production code: split long methods into several smaller ones and build helper objects that perform certain operations, so you can do this in your test:
@Test
public void someComplexTest() throws Exception {
    new LoginHelper().loginAsAdmin();
    ....
}
or move this code into a base test class:
@Test
public void someComplexTest() throws Exception {
    loginHelper().loginAsAdmin();
    ....
}
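To make the helper idea concrete, here is a minimal sketch of what such a LoginHelper might look like. It is only an illustration: TestHttpClient and Response are hypothetical stand-ins for whatever mock-request plumbing your tests already use.

public class LoginHelper {
    private final TestHttpClient client; // hypothetical wrapper around the test's HTTP plumbing

    public LoginHelper(TestHttpClient client) {
        this.client = client;
    }

    public void loginAsAdmin() {
        login("admin", "admin-password");
    }

    public void login(String user, String password) {
        // Configure and post the login request, then fail fast if it didn't work,
        // so the real test body can focus on its own assertions.
        Response response = client.post("/login", Map.of("user", user, "password", password));
        if (response.status() != 200) {
            throw new AssertionError("Login failed for user: " + user);
        }
    }
}

The same treatment applies to the question's configure/post/check triples: each becomes a named method on a helper, and the long test shrinks to a readable sequence of calls.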

Related

Is there any way to get the C# object/data on which an NUnit test is failing?

I am writing unit tests for a complex application that has many rules to be checked in a single flow, using NUnit and Playwright on .NET 5. To save time writing test scripts for Playwright (a front-end testing tool), we use a library named Bogus to create dummy data dynamically based on the rules (the test cases have numerous rules to check, and writing fresh data for every case was much more difficult). I use the Playwright script in an NUnit test and provide the data via [TestCaseSource("MethodName")], which supplies a dynamic data object for each case.
Now we are facing a problem: some of the test cases pass and some fail, and we cannot identify which test case is causing the problem, because the test data comes from the dynamic source, where it is generated by the Bogus library according to the rules we defined. We also cannot watch the tests for long stretches; that is why we automated the process in the first place.
[Test]
[TestCaseSource("GetDataToSubmit")]
public async Task Test_SubmitAssignmentDynamicFlow(Assignment assignment)
{
    using var playwright = await Playwright.CreateAsync();
    await using var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false,
        ...
    });
    ....
}

private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    // creating data for simple job
    var simpleAssignment = new DummyAssigmentGenerator()
        ....
        .Generate();
    yield return new TestCaseData(simpleAssignment);
    ....
}
Now, my question is: is there any way to see the actual values of the object for a failed case when we view the full test report? That way we can tell which values are causing problems and eventually fix them.
Two approaches...
1. Assuming that Assignment is your own class, override its ToString() method to display whatever you would like to see. That string will become part of the name of the generated test case, like...
Test_SubmitAssignmentDynamicFlow(YOUR_STRING)
2. Apply a name to each TestCaseData item you yield using the SetName() fluent method. In that case, you are supplying the full display name of the test case, not just the part in parentheses. Use {m}(YOUR_STRING) to make it appear the same as in the first approach.
If you can use it, the first approach is clearly the simpler of the two.
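A sketch of both approaches, assuming Assignment is a plain class you control (Title and Points are made-up properties):

public class Assignment
{
    public string Title { get; set; }
    public int Points { get; set; }

    // Approach 1: NUnit builds the test case name from the arguments'
    // ToString(), so the failing values show up directly in the report.
    public override string ToString() => $"Title={Title}, Points={Points}";
}

private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    var simpleAssignment = new DummyAssigmentGenerator()
        .Generate();

    // Approach 1: rely on Assignment.ToString() for the display name
    yield return new TestCaseData(simpleAssignment);

    // Approach 2: supply the full display name; {m} expands to the method name
    yield return new TestCaseData(simpleAssignment)
        .SetName("{m}(" + simpleAssignment + ")");
}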

How to make a command wait until all events triggered by it have completed successfully

I have come across a requirement where I want Axon to wait until all events fired on the event bus for a particular command have finished executing. Let me briefly describe the scenario:
I have a RestController which fires the below command to create an application entity:
@RestController
class MyController {
    @PostMapping("/create")
    @ResponseBody
    public String create() {
        // org.axonframework.commandhandling.gateway.CommandGateway
        commandGateway.sendAndWait(new CreateApplicationCommand());
        System.out.println("in myController:: after sending CreateApplicationCommand");
        return "ok";
    }
}
This command is handled in the aggregate. The aggregate class is annotated with org.axonframework.spring.stereotype.Aggregate:
@Aggregate
class MyAggregate {
    @CommandHandler // org.axonframework.commandhandling.CommandHandler
    private MyAggregate(CreateApplicationCommand command) {
        org.axonframework.modelling.command.AggregateLifecycle.apply(new AppCreatedEvent());
        System.out.println("in MyAggregate:: after firing AppCreatedEvent");
    }

    @EventSourcingHandler // org.axonframework.eventsourcing.EventSourcingHandler
    private void on(AppCreatedEvent appCreatedEvent) {
        // Updates the state of the aggregate
        this.id = appCreatedEvent.getId();
        this.name = appCreatedEvent.getName();
        System.out.println("in MyAggregate:: after updating state");
    }
}
The AppCreatedEvent is handled in two places:
In the Aggregate itself, as we can see above.
In the projection class as below:
@EventHandler // org.axonframework.eventhandling.EventHandler
void on(AppCreatedEvent appCreatedEvent) {
    // persists into database
    System.out.println("in Projection:: after saving into database");
}
The problem here is that after the event is handled in the first place (i.e., inside the aggregate), the call returns to myController.
That is, the output here is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in myController:: after sending CreateApplicationCommand
in Projection:: after saving into database
The output I want is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in Projection:: after saving into database
in myController:: after sending CreateApplicationCommand
In simple words, I want Axon to wait until all events triggered by a particular command have been handled completely, and only then return to the class which dispatched the command.
After searching the forum, I learned that all sendAndWait does is wait until the handling of the command and the publication of the events is finalized. I then tried the Reactor extension as well, using the call below, but got the same results: org.axonframework.extensions.reactor.commandhandling.gateway.ReactorCommandGateway.send(new CreateApplicationCommand()).block();
Can someone please help me out?
Thanks in advance.
What would be best in your situation, @rohit, is to embrace the fact that you are using an eventually consistent solution here. Command handling is entirely separate from event handling, which makes the query models you create eventually consistent with the command model (your aggregates). Hence, you wouldn't wait for the events as such, but react once the query model has been updated.
Embracing this comes down to building your application such that "yeah, I know my response might not be up to date now, but it will be somewhere in the near future." It is thus recommended to subscribe to the result you are interested in before or right after dispatching the command.
For example, you could do this with WebSockets and the STOMP protocol, or you could tap into Project Reactor and use the Flux result type to receive the results as they come in.
From your description, I assume you or your business have decided that the UI component should react in the (old-fashioned) synchronous way. There's nothing wrong with that, but it will bite your *ss when it comes to using something inherently eventually consistent like CQRS. You can, however, spoof the synchronous behaviour in your front end, if you will.
To achieve this, I would recommend using Axon's Subscription Query to subscribe to the query model you know will be updated by the command you will send.
In pseudo-code, that would look a little bit like this:
public Result mySynchronousCall(String identifier) {
    // Subscribe to the updates to come
    SubscriptionQueryResult<Result, Result> result = queryGateway.subscriptionQuery(...);
    // Issue the command that will (eventually) update the query model
    commandGateway.send(...);
    // Wait on the Flux for the first update, then close the subscription
    return result.updates()
                 .next()
                 .map(...)
                 .timeout(...)
                 .doFinally(signal -> result.close())
                 .block();
}
You could see this being done in this sample WebFluxRest class, by the way.
Note that by doing this you are essentially closing the door on letting the front end tap into the asynchronous goodness. It'll work, and it allows you to return the result as soon as it is there, but you'll lose some flexibility.

Separating data and test settings in QTests

I am currently using the following pattern when creating tests with QTest.
One test class per production class.
If a class has some 'global' setting run the test class multiple times with each such setting.
Each production class method has one test method.
Each test method has _data method.
Each _data method specify settings and data to be used and names the cases.
This last point somewhat bothers me, because I am not passing just data but also the data used to initialise that particular test. Sometimes it looks weird, and even though my tests are short, they are not all that intuitive because of the initialisation logic.
The alternative pattern I know of is to split each test method (breaking my rule #3) based on its initialisation needs. On one hand this would eliminate a lot of _data methods, but it would also make the test classes much bigger and no longer easily relatable to the production class (though the naming would help). Most Google Test suites are written like this.
Another alternative would be to use the global state of the object, much as I treat global settings. If the object can be either valid or invalid, that would not be part of each _data method but rather a setting of the test class, which would run in either configuration.
My main concern is maintainability. With my current approach I sometimes struggle to understand the nuances of the settings I pass to the tests, and I need some sensible way to separate them so they do not burden me even more.
For global settings you run the test class multiple times, so IMHO doing the same for local settings doesn't really "violate" your rule #3; it is more an extension of rule #2.
Alternatively you could make the initialization routine another thing that is part of the test data.
Something like
private slots:
    void someMethodTest_data()
    {
        QTest::addColumn<QByteArray>("settings");
        //....
        QTest::addRow("case1") << QByteArrayLiteral("settings1") << ....
    }
    void someMethodTest()
    {
        QFETCH(QByteArray, settings);
        const QByteArray initMethod = QByteArray(QTest::currentTestFunction()) + "_init_" + settings;
        QMetaObject::invokeMethod(this, initMethod.constData(), Qt::DirectConnection);
        // commence test
    }
protected slots:
    void someMethodTest_init_settings1();
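For completeness, the body of one such per-setting init slot might look like the following. The member and setter are purely illustrative placeholders for whatever your class under test actually configures:

void TestSomeClass::someMethodTest_init_settings1()
{
    // Hypothetical: put the object under test into the configuration
    // that the "settings1" rows expect before the test body runs.
    m_object.setSomeGlobalOption(true);
}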

How to test service object methods are called?

I'm trying to build some tests for my service objects.
My service file is as follows...
class ExampleService
  def initialize(location)
    @location = coordinates(location)
  end

  private

  def coordinates(location)
    Address.locate(location)
  end
end
I want to test that the private methods are called by the public methods. This is my code...
subject { ExampleService.new("London") }

it "receives location" do
  expect(subject).to receive(:coordinates)
  subject
end
But I get this error...
expected: 1 time with any arguments
received: 0 times with any arguments
Short answer: don't test it at all.
Long answer: after seeing Sandi Metz's advice on testing, you will agree, and you will want to test the way she does.
This is the basic idea:
The public methods of your class (the public API) must be tested.
The private methods don't need to be tested.
Summary of tests to do:
For incoming query messages, test the result.
For incoming command messages, test the direct public side effects.
For outgoing command messages, expect to send them.
Ignore: messages sent to self, commands to self, and queries to others.
Taken from the slides of the conference talk.
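Applied to the service above, that means asserting on what ExampleService exposes rather than on the private call, as sketched below. The location reader and the stubbed coordinates are assumptions for illustration:

RSpec.describe ExampleService do
  it "resolves the location to coordinates" do
    # Stub the outgoing query so the test controls its result
    allow(Address).to receive(:locate).with("London").and_return([51.5, -0.1])

    service = ExampleService.new("London")

    # Assert on observable state, not on the private method call
    # (assumes an attr_reader :location is added to the service)
    expect(service.location).to eq([51.5, -0.1])
  end
end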
In your first example, your subject has already been instantiated/initialized by the time you set expectations on it (passing it to expect invokes coordinates in the process), so there is no way for the expectation to receive :coordinates to succeed. As an aside, subject is memoized, so the line that follows does not trigger an additional instantiation.
If you want to make sure your initialization calls a particular method, you could use the following:
describe do
  subject { ExampleService.new("London") }

  it "receives coordinates" do
    expect_any_instance_of(ExampleService).to receive(:coordinates)
    subject
  end
end
See also Rails / RSpec: How to test #initialize method?

Unittesting a Save function using MVP pattern and RhinoMock

I am trying to get better code coverage with my unit tests, and recently I switched to Rhino Mocks for my mocking needs.
But I have a question about how to write one specific unit test: the Save() function.
I have an IView interface with several functions to retrieve values from the view (an aspx page); examples are GetUsername(), GetPassword(), GetAddress() and GetCountry().
When the user clicks the submit button, I want tests that verify all these functions are actually called. So I wrote this test:
[TestMethod]
public void MainController_Save_ShouldRetrieveUsername()
{
    // Initialize the IView and Controller
    InitViewAndController();
    // Trigger the Save function, causing the controller
    // to collect information for storage
    _controller.Save();
    _view.AssertWasCalled(s => s.GetUsername(), o => o.Repeat.Once());
}
Now finally comes the question: considering the aspx page contains 15 input fields that need to be saved, is there a better way to test this behaviour than writing and maintaining 15 of these tests?
On one hand a test should be simple, optimally with a single assert, but 15 of these functions feels like a waste.
Instead of testing whether these functions are called (by the way, they look more like properties than functions), you should check the results of the Save function. Treat the code under test more like a black box and try not to bake too much knowledge of its interior into your tests. That way your tests will be less brittle when the Save code changes.
Google for "state based testing".
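A sketch of what that state-based version could look like with Rhino Mocks stubs. InMemoryUserRepository, the MainController constructor and the saved-user properties are assumptions about your code, not a real API:

[TestMethod]
public void MainController_Save_PersistsAllViewFields()
{
    // Stub the view once with known values
    var view = MockRepository.GenerateStub<IView>();
    view.Stub(v => v.GetUsername()).Return("alice");
    view.Stub(v => v.GetPassword()).Return("secret");
    view.Stub(v => v.GetAddress()).Return("1 Main St");
    view.Stub(v => v.GetCountry()).Return("NL");

    var repository = new InMemoryUserRepository(); // hypothetical test double for the storage
    var controller = new MainController(view, repository);

    controller.Save();

    // One state-based test covers all the saved fields at once
    var saved = repository.LastSaved;
    Assert.AreEqual("alice", saved.Username);
    Assert.AreEqual("1 Main St", saved.Address);
    Assert.AreEqual("NL", saved.Country);
}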
