One of my PHPUnit tests uses a dataset (provided by a data provider) that takes quite a long time to generate. When I exclude this test from execution, its data provider method is still executed.
How can I skip execution of a data provider?
If you're skipping the slow tests from inside the test, using $this->markTestSkipped(), you can't prevent the data provider from being executed.
This is because PHPUnit must gather the data that will be passed to the tests before executing them.
Possible solutions:
Set a flag from the command line (e.g. an environment variable, or a php.ini value via PHPUnit's -d key[=value] option) that is read by both the data provider and the slow test: the data provider returns a dummy array and the test marks itself as skipped (see the sketch after this list).
Mark the test as slow using the @group annotation and use the --exclude-group command-line option so that tests in the slow group won't be executed.
Refactor the test so that no data providers are used, accepting the inconvenience of not having each dataset as a distinct test case.
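A minimal sketch of the first option, assuming an environment variable is used as the flag; the variable name SKIP_SLOW and all class and method names below are made up for illustration:

use PHPUnit\Framework\TestCase;

class SlowDatasetTest extends TestCase
{
    public static function slowProvider(): array
    {
        if (getenv('SKIP_SLOW')) {
            // Dummy row so PHPUnit still has something to iterate over.
            return [[null]];
        }
        return self::generateExpensiveDataset();
    }

    /**
     * @dataProvider slowProvider
     */
    public function testSlowThing($data): void
    {
        if (getenv('SKIP_SLOW')) {
            $this->markTestSkipped('Skipped because SKIP_SLOW is set.');
        }
        $this->assertNotNull($data); // stands in for the real assertions
    }

    private static function generateExpensiveDataset(): array
    {
        return [['expensive']]; // stands in for the long-running generation
    }
}

Running SKIP_SLOW=1 phpunit then skips the slow test without paying for the dataset generation.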
Related
I'm new to Appium. I have created a set of test cases, and for every test case I use @BeforeTest and add my desired capabilities.
Is there a way I can define them once and just call them, instead of writing them out for every test?
I tried to add the desired capabilities into a class and call it before the tests, but it didn't work.
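One common pattern (a sketch only; it assumes TestNG with the Appium Java client, and every name and capability value below is hypothetical) is to move the setup into a base class that each test class extends:

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import java.net.URL;

public class BaseTest {
    protected AndroidDriver driver;

    // Inherited by every subclass, so the capabilities live in one place.
    @BeforeTest
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "emulator-5554"); // assumption
        caps.setCapability("app", "/path/to/app.apk");     // assumption
        driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
    }

    @AfterTest
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}

Each test class then extends BaseTest and uses the inherited driver field instead of repeating the capabilities.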
I'm using ScalaMock to write a test. The problem is that the action is asynchronous.
I have the following pseudocode:
val resultCollectorMock = mock[ResultCollector]
(resultCollectorMock.collectResult _).expects(someResult)
val serviceUnderTest = new ServiceUnderTest(resultCollectorMock)
serviceUnderTest.runAsyncJob(someParams)
This fails because the result is computed asynchronously: at the time the test ends it is still not ready, so collectResult hasn't been called yet.
What I want is something like expectEventually(value)(patienceConfig) that can wait for some time for the method to be called.
I tried to use a stub and verify instead, wrapping it in eventually from ScalaTest, but to no avail; for whatever reason verify seems to fail the test at its first evaluation.
You should use AsyncMockFactory with an appropriate async test suite and Futures, as described in the docs at https://scalamock.org/user-guide/integration/
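A minimal sketch, assuming the service can expose a Future that completes when the job is done; the trait, class, and values below are stand-ins for the question's pseudocode:

import org.scalamock.scalatest.AsyncMockFactory
import org.scalatest.flatspec.AsyncFlatSpec
import scala.concurrent.{ExecutionContext, Future}

trait ResultCollector {
  def collectResult(result: Int): Unit
}

class ServiceUnderTest(collector: ResultCollector)(implicit ec: ExecutionContext) {
  // Returning a Future gives the test something to wait on.
  def runAsyncJob(n: Int): Future[Unit] =
    Future { collector.collectResult(n * 2) }
}

class ServiceUnderTestSpec extends AsyncFlatSpec with AsyncMockFactory {
  "runAsyncJob" should "hand its result to the collector" in {
    val resultCollectorMock = mock[ResultCollector]
    (resultCollectorMock.collectResult _).expects(42)
    val serviceUnderTest = new ServiceUnderTest(resultCollectorMock)
    // The expectation is only verified after the returned Future completes.
    serviceUnderTest.runAsyncJob(21).map(_ => succeed)
  }
}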
I have SomeBigFlow that calls multiple subflows inside it, i.e. ValidateFlowA and ValidateFlowB. Assume it is mandatory for A and B to be initiating flows, not functions.
How do I mock a return value for ValidateFlowA when I run SomeBigFlow in JUnit?
I've seen some references to using registerAnswer to mock flows' return values here. I am also curious why this function is only available for InternalMockNetwork.MockNode but not for MockNetwork.StartedMockNode, which is typically used during JUnit testing.
I thought I could replicate it by calling node[1].registerAnswer(ValidateFlowA.class, 20). But when I ran node[1].startFlow(SomeBigFlow).resultFuture.getOrThrow(), ValidateFlowA still used its default call implementation instead of returning the mocked integer value 20. Maybe I'm using it wrong.
Any pointers on how to make this work, or is there another way to mock inlined subflows' return values? The only alternative I can think of is a rule of thumb that every call to an inlined subflow goes through an open fun that can be overridden during mock-network testing; that makes inlined subflows tedious, so I'm hoping for a neater way.
For now, you'd have to use a similar approach to the one outlined here: Corda with mockito_kotlin for unit test.
In summary (a sketch follows the list):
Make the FlowLogic class you are testing open, and move the call to the subflow into an open method
For testing, create a subclass of the open FlowLogic class where you override the open method to return a dummy result
Use this subclass in testing
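A hedged sketch of what that looks like; the flow names mirror the question, but the bodies and return values are made up:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic

class ValidateFlowA : FlowLogic<Int>() {
    @Suspendable
    override fun call(): Int = TODO("real validation logic")
}

open class SomeBigFlow : FlowLogic<Int>() {
    @Suspendable
    override fun call(): Int = validateA() + 1

    // The subflow call lives in an open method so a test can override it.
    @Suspendable
    protected open fun validateA(): Int = subFlow(ValidateFlowA())
}

// Started on the mock network in place of SomeBigFlow: the subflow
// is short-circuited with a canned value.
class TestableSomeBigFlow : SomeBigFlow() {
    @Suspendable
    override fun validateA(): Int = 20
}

Starting TestableSomeBigFlow on the mock network then exercises SomeBigFlow's logic with ValidateFlowA mocked out.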
I'd like to prevent the following task from getting run multiple times when sbt is running:
val myTask = someSettings map { s => if (!s.isDone) doSomethingAndSetTheFlag() }
What I expect: when myTask is run for the first time, isDone is false, so the task does its work and then sets the flag to true. When the task is run a second time, the isDone flag is true, so the actual execution block is skipped.
The expected behavior is similar to compile: once the source is compiled, the task doesn't compile the code again the next time it's triggered, until watchSource reports that the code has changed.
Is it possible? How?
sbt already does this: a task is evaluated at most once within a single run. If you want a value to be evaluated only once, at project load time, change it to a SettingKey.
This is documented in the sbt documentation:
As mentioned in the introduction, a task is evaluated on demand. Each time sampleTask is invoked, for example, it will print the sum. If the username changes between runs, stringTask will take different values in those separate runs. (Within a run, each task is evaluated at most once.) In contrast, settings are evaluated once on project load and are fixed until the next reload.
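The distinction in build.sbt terms, as a small sketch with hypothetical key names:

// A task is evaluated on demand, but at most once per command invocation;
// every task depending on myTask within one run shares a single evaluation.
val myTask = taskKey[Unit]("does the expensive work")
myTask := {
  println("doing the expensive work")
}

// A setting is evaluated once, at project load, and then stays fixed
// until the next `reload`.
val myFlag = settingKey[Long]("computed once at project load")
myFlag := System.currentTimeMillis()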
What is the most efficient way to check if the current QTP test execution is interactive, i.e. not part of a QC test set execution launched from the QC test lab?
Do you guys know a cool way? WinRunner (WR) used to have a batch-run flag which was reliably cleared for all executions from within the IDE. Maybe QTP has something like this and I overlooked it?
First, I thought about looking at the OnError property:
Set qtApp = GetObject("", "QuickTest.Application")
qtApp.Test.Settings.Run.OnError now returns one of these possible values:
"Dialog", "NextIteration", "Stop" or "NextStep".
This would allow me to look at the OnError setting, which probably is <> "Dialog" and <> "Stop" when execution is part of a test set, but:
I have managed to avoid the automation interface in all my QTP tests so far; this would be my first exception (earlier QTP versions got confused and launched a second QTP instance, creating lots of problems...).
A tester might do an "interactive" run from within the QTP IDE with this setting set to "NextStep" or "NextIteration", which my code would then misinterpret.
It does not work anyway: even if no dialogs come up (because execution was started from a QC test set), the value returned is "Dialog". DOH!
No need to go to the automation object; this is exposed in the Setting object.
If Setting("IsInTestDirectorTest") Then
    Print "Run from QC"
Else
    Print "Not run from QC"
End If
Note that TestDirector (TD) is the historical name of QualityCenter (QC).
It might be an option to use
Public Function IsTestSetRun()
    Dim Result: Result = False
    ' QCUtil.CurrentTestSetTest is only set when the run was started
    ' from a QC test set.
    If Not QCUtil Is Nothing Then
        If Not QCUtil.CurrentTestSetTest Is Nothing Then
            Result = True
        End If
    End If
    IsTestSetRun = Result
End Function
which is based on QCUtil.CurrentTestSetTest. Unfortunately, it returns true if you run a GUI test interactively, so it is not really a complete solution.
But since the other option does not work with BPT components, I am now using this option.