In one of our projects we are experiencing random test failures; when we re-run the failed tests, more often than not they succeed again.
We're using Laravel 5.8 with PHPUnit for tests, and in our tests we often seed the database using factories and Faker.
So I was wondering: is it possible, at the end of the tests (especially when they fail), to print the seed Faker used to generate its values? I could then set that seed before running the tests again to reproduce the errors.
After some more checking I found that Faker uses mt_rand() internally.
So what I did was set the seed at the start of each test in setUp() and then print it in tearDown():
abstract class TestCase extends BaseTestCase
{
    protected $seed;

    protected function setUp(): void
    {
        parent::setUp();

        // Seed the random number generator for this test so we can easily
        // reproduce the same result; the seed used is printed when the test fails
        $this->seed = rand();
        mt_srand($this->seed);
    }

    protected function tearDown(): void
    {
        // Print the seed so we know which one was used for this failing test
        if ($this->hasFailed()) {
            echo $this->seed;
        }

        parent::tearDown();
    }
}
Then it will print the seed of each failed test, and to reproduce a failure while debugging, simply replace $this->seed = rand(); with $this->seed = <the seed>;.
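In other words, to replay a failing run, setUp() becomes something like this (the seed value here is just an example; use whatever the failed test printed):

protected function setUp(): void
{
    parent::setUp();

    // Hard-code the seed that was printed by the failing test run
    $this->seed = 123456789; // example value
    mt_srand($this->seed);
}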
I have the following code:
private function registerShutdownFunction(): void
{
    register_shutdown_function(function () {
        $this->dropDatabasesAndUsersIfExist();
    });
}
And this code:
private function dropDatabasesAndUsersIfExist(): void
{
    // some code for deletion of the databases...
    foreach ($connections as $connection) {
        $this->assertNotContains($connection, $databases);
    }
}
But dropDatabasesAndUsersIfExist() is not a "test..." method, and PHPUnit ignores assertions outside of test methods.
It also seems problems may occur because this shutdown function runs directly before the script dies...
You can use PHPUnit's Assert class outside of test cases if that is really what you want to do:
PHPUnit\Framework\Assert::assertNotContains($connection, $databases);
Edit: After reading your question one more time I'm not really sure if my answer helps you. If I got you right, you are already using the assertion but it did not behave as you'd expect. My guess is that you want the whole test run to fail if any of the assertions in dropDatabasesAndUsersIfExist is not met.
One solution could be to move the checks you are doing in dropDatabasesAndUsersIfExist to a separate test class that should be executed last. You can achieve this by appending another test suite with the new class right after your test suite(s).
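For example, a minimal phpunit.xml sketch (the suite names and paths here are hypothetical): PHPUnit runs test suites in the order they are declared, so the checking suite executes last:

<testsuites>
    <testsuite name="main">
        <directory>tests</directory>
    </testsuite>
    <!-- runs after "main"; contains the class with the database checks -->
    <testsuite name="db-checks">
        <file>tests/DropDatabasesCheckTest.php</file>
    </testsuite>
</testsuites>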
I've got a method that contains logic that varies between release mode and debug mode. It is not advanced at all, but I still want a unit test for it, since my application will be used as part of a bigger system and I want to redirect the user to other sites if it is not used in release mode.
And now to my question: is there any way to force the unit test to run in Release mode? I don't want to manually change the build configuration every time I want to run my unit tests.
Instead of running your unit tests in Release mode, you can create a test seam so you can control which behavior you want to elicit. You might be able to do something like this:
public class Foo {
    public int Bar() {
        if (IsDebugModeEnabled()) {
            return 1;
        } else {
            return 0;
        }
    }

    // virtual so a test can override it (see below)
    public virtual bool IsDebugModeEnabled() {
#if DEBUG
        return true;
#else
        return false;
#endif
    }
}
This way you have a couple of options to test both paths of your logic. You can create a subclass of Foo and override IsDebugModeEnabled, or use a partial mock to directly set the return value.
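For instance, a minimal sketch of the subclass approach (the [Test] attribute assumes NUnit and the subclass name is made up; it relies on IsDebugModeEnabled being virtual, as above):

// Test double that forces the release-mode branch
public class ReleaseModeFoo : Foo {
    public override bool IsDebugModeEnabled() {
        return false;
    }
}

[Test]
public void Bar_ReturnsZero_WhenNotInDebugMode() {
    var foo = new ReleaseModeFoo();
    Assert.AreEqual(0, foo.Bar());
}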
My reading of the TestNG docs suggests that if I have a test method marked like this:
@BeforeSuite(alwaysRun = true)
@Test
public void MyTestMethod() { ... }
then MyTestMethod would run before any other test defined anywhere, regardless of class, suite or group. But that does not seem to be the case.
Is there a way to make a test method run unconditionally before everything else? (And that if it fails no other tests will be run.)
Edit:
The test class:
class Suite_Setup extends BaseTestSuite {

    @BeforeSuite(alwaysRun = true)
    def setup() {
        System.out.println("Conducting test suite setup...")
        // Verify that the internal API is configured properly and that the API host is available...
        new Action(ApiHostname, new BasicCookieStore)
    }
}
Edit:
The answer:
We generate our own TestNG.xml files (automatically), and the class with the @BeforeSuite method was not being included in them. Once it was included, @BeforeSuite had the expected effect.
However, it does appear that @BeforeSuite (and presumably the other @Before... and @After... annotations) can be mixed with @Test, and rather than inhibiting the execution of the annotated method, they cause it to run more than once!
Also, I was remiss in not indicating which version of TestNG I'm using. It's 6.2.
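For reference, a minimal testng.xml sketch of the fix (the package and class names here are hypothetical): the class holding the @BeforeSuite method must be listed in the suite, or TestNG never picks it up:

<suite name="MainSuite">
    <test name="AllTests">
        <classes>
            <!-- holds the @BeforeSuite setup method -->
            <class name="com.example.Suite_Setup"/>
            <class name="com.example.OtherTests"/>
        </classes>
    </test>
</suite>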
Try using groups either on class level or on method level.
In my case, I have a set of smoke tests that need to run before everything and if any of those fail, no other tests should run.
#Test(groups="smoke")
public class SmokeTests {
public void test1 {
//
}
...
}
And all other classes with tests are:
@Test(dependsOnGroups = "smoke")
public class OtherTests {
    public void test1() {
        //
    }
    ...
}
I think that the answer is
to separate the configuration methods from the test methods,
to use @BeforeSuite for the method to be executed before all tests in the suite (for example to get an EntityManager), and
to use dependsOnMethods to decide the order in which the tests shall run.
In the following example the testRemove will run after testInsert, testReadAll and testUpdate have run.
testReadAll will run after testInsert has run.
testUpdate will run after testInsert has run.
Note that there is no dependency between testReadAll and testUpdate.
@Test
public void testInsert()
{..}

@Test(dependsOnMethods = {"testInsert"})
public void testUpdate()
{..}

@Test(dependsOnMethods = {"testInsert"})
public void testReadAll()
{..}

@Test(dependsOnMethods = {"testInsert", "testUpdate", "testReadAll"})
public void testRemove()
{..}
Remove @Test; a method cannot be both a configuration method and a test method.
@Test(alwaysRun = true)
makes the test run regardless of the methods or groups it depends on, even if they have failed.
Is there a way to mark a test method so it will unconditionally be run before everything else?
Just assign the target test the lowest possible priority value.
@Test(priority = -2147483648) // Integer.MIN_VALUE
public void myTest() {
    ...
}
You can read more about test priority in the TestNG documentation.
And that if it fails no other tests will be run.
You need to make the other tests depend on this test method by using one of the following options:
Assign your first method to a group and mark the other tests with dependsOnGroups.
Mark the other tests with dependsOnMethods.
If you use one of the dependency options, you do not need to set a priority on your first method.
How do you prevent a CakePHP 2.0 test case, which extends CakeTestCase (uses PHPUnit), from reloading fixtures between tests?
Background: There is a set of integration tests which we have written with CakePHP 2.0 using PHPUnit. We have extended the test case class from the standard CakeTestCase. For this set of tests, we have a bunch of fixtures set up to populate the database. Naturally, these tests take a long time to run. Basically, all of the time is coming from Cake unloading and re-loading all of the fixtures between tests.
All of the tests act as READ only. We are just issuing find calls to the database and testing the logic among a set of class interactions based on those results. In fact, the tests can be boiled down to:
class ALongRunningTest extends CakeTestCase {
    public $fixtures = array('app.class1', 'app.class2', ... 'app.class8');

    /**
     * @dataProvider provider
     * @test
     */
    public function checkCompositionLogic($val1, $val2, $val3) {
        // internally calls class1 and class3
        $data = $this->ModelX->generateComplexStructure($val1);

        // internally calls other classes & models, which touch the
        // other loaded fixtures
        $results = $this->ModelY->checkAllWhichApply($val2, $data);

        $this->assertEquals($val3, $results);
    }

    public function provider() {
        return array(
            array(stuff, stuff1, stuff2),
            array(x_stuff, x_stuff1, x_stuff2),
            array(y_stuff, y_stuff1, y_stuff2),
            array(z_stuff, z_stuff1, z_stuff2),
            array(a_stuff, a_stuff1, a_stuff2),
            // More test cases
        );
    }
}
I've not been able to find anything on how to prevent this. I saw in the CakeTestCase class a public variable autoFixtures with a comment that says if you change it to false it won't load the fixtures. It makes a note stating that you have to load them manually. However, I see no documentation on how to actually load them manually.
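For what it's worth, a sketch of how the manual route appears to work in CakePHP 2.x, assuming CakeTestCase::loadFixtures() accepts the camelized fixture names (the fixture and test names here are illustrative):

class ALongRunningTest extends CakeTestCase {
    public $fixtures = array('app.class1', 'app.class3');

    // Disable automatic fixture (re)loading between tests
    public $autoFixtures = false;

    public function testSomething() {
        // Load only the fixtures this test actually needs, on demand
        $this->loadFixtures('Class1', 'Class3');
        // ... read-only find() calls and assertions here
    }
}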
Strictly speaking, CakePHP is correct in the way that it works. Tests shouldn't be dependent upon each other, so the database is reset between each test case. You could even argue that it should be reset between each test method, but the overhead would be even more noticeable.
However, since you're doing read only actions on the database, you could remove all the references to fixtures in your test cases and set up your database entries before you run the test suite (e.g. import it from an SQL file).
Or you could create a custom test suite that adds a whole load of data, e.g:
class AllTest extends CakeTestSuite {
    public static function suite() {
        self::loadDB();

        $suite = new CakeTestSuite('All tests');
        $suite->addTestDirectoryRecursive(TESTS . 'Case');
        return $suite;
    }

    public static function loadDB() {
        // Do some set up here using your models
    }
}
The advantage of doing it that way is that if you ever had to introduce tests that do write to the database, you could run them under a separate test suite.
I'm noticing when I use mock objects, PHPUnit will correctly report the number of tests executed but incorrectly report the number of assertions I'm making. In fact, every time I mock it counts as another assertion. A test file with 6 tests, 7 assert statements and each test mocking once reported 6 tests, 13 assertions.
Here's the test file with all but one test removed (for illustration here), plus I introduced another test which doesn't stub to track down this problem. PHPUnit reports 2 tests, 3 assertions. I remove the dummy: 1 test, 2 assertions.
require_once '..\src\AntProxy.php';

class AntProxyTest extends PHPUnit_Framework_TestCase {
    const sample_client_id = '495d179b94879240799f69e9fc868234';
    const timezone = 'Australia/Sydney';
    const stubbed_ant = "stubbed ant";
    const date_format = "Y";

    public function testBlankCategoryIfNoCacheExists() {
        $cat = '';
        $cache_filename = $cat . '.xml';
        if (file_exists($cache_filename)) {
            unlink($cache_filename);
        }

        $stub = $this->stub_FreshAnt($cat);

        $expected_output = self::stubbed_ant;
        $actual_output = $stub->getAnt();
        $this->assertEquals($expected_output, $actual_output);
    }

    public function testDummyWithoutStubbing() {
        $nostub = new AntProxy(self::sample_client_id, '', self::timezone, self::date_format);
        $this->assertTrue(true);
    }

    private function stub_FreshAnt($cat) {
        $stub = $this->getMockBuilder('AntProxy')
            ->setMethods(array('getFreshAnt'))
            ->setConstructorArgs(array(self::sample_client_id, $cat, self::timezone, self::date_format))
            ->getMock();

        $stub->expects($this->any())
            ->method('getFreshAnt')
            ->will($this->returnValue(self::stubbed_ant));

        return $stub;
    }
}
It's like an assert has been left in one of the framework's mocking methods. Is there a way to show every (passing) assertion being made?
After each test method completes, PHPUnit verifies the mock expectations set up during the test. PHPUnit_Framework_TestCase::verifyMockObjects() increments the number of assertions for each mock object created. You could override the method to undo this if you really want, by storing the current number of assertions, calling the parent method, and subtracting the difference.
protected function verifyMockObjects()
{
    $count = $this->getNumAssertions();
    parent::verifyMockObjects();
    $this->addToAssertionCount($count - $this->getNumAssertions());
}
Of course, verifyMockObjects() will throw an assertion failure exception if any expectation is unsatisfied, so you'll need to catch the exception and rethrow it after resetting the count. I'll leave that to you. :)
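A sketch of that catch-and-rethrow variant (the exception class name assumes the same pre-namespaced PHPUnit as the code above):

protected function verifyMockObjects()
{
    $count = $this->getNumAssertions();
    try {
        parent::verifyMockObjects();
    } catch (PHPUnit_Framework_ExpectationFailedException $e) {
        // Reset the count before rethrowing so the failure still surfaces
        $this->addToAssertionCount($count - $this->getNumAssertions());
        throw $e;
    }
    $this->addToAssertionCount($count - $this->getNumAssertions());
}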